
Engagement Operations and Security

TrafficSim

RISC Networks’ TrafficSim engagements require the use of RISC Networks’ virtual appliance to simulate and evaluate the performance of real-time traffic on the network. Virtual (or physical) RN50 appliances must be deployed and registered with the RN150 appliance. These RN50 endpoints operate as the termination points of a simulation.

Cisco Unified Communications

The Cisco Unified Communications Analytics engagement requires a Unified Communications Manager AXL username and password. Although this username and password combination can belong to a user within the ‘Super Users Group’, only membership in the ‘AXL API Access Group’ is required for CUCM 5.x and later. It is recommended to set up a temporary AXL user where possible, which can then be deleted at the completion of the Analytics engagement.

Cisco Unified Communications credentials entered on the appliance web interface are encrypted and handled in the same manner as Windows and SNMP credentials. All are maintained on the virtual appliance for the duration of the assessment and until the virtual appliance is deleted.

Traffic Analytics

RISC Networks’ Traffic Analytics module is used to capture actual network traffic and report on traffic profiles within the network. There are two methods of deploying Traffic Analytics: embedded and virtual appliance based.

Embedded Traffic Analytics involves the deployment of Cisco NetFlow within a Cisco environment. This deployment is performed via SNMP, so SNMP Write strings are required for embedded Traffic Analytics. RISC Networks does not support user-deployed NetFlow configurations. Cisco NetFlow technology provides accounting records only for traffic. No user traffic is captured. Only a record of the traffic (source and destination IP, source and destination port, protocol, bytes, duration, etc.) is available.

Virtual appliances capture traffic through a SPAN port on a switch. The virtual appliance does not record any user payload information for use in its analysis. Any deep packet analysis that is required is performed on the virtual appliance itself by a protocol decoder and is used only for statistical analysis. For example, an HTTP GET followed by an HTTP 200 OK message would represent the duration of a web site download. This level of analysis may be performed by the virtual appliance, but the details of the web page itself, including user input data or return data, are not reported to the virtual appliance for processing. The raw captures of the details are overwritten every 5 minutes on the virtual appliance and are permanently lost after power cycling the virtual appliance.

Data Center Analytics

Data Center Analytics are included in your IT HealthCheck assessment. These add VMware inventory and performance data as well as Fibre Channel inventory and performance data as additional data sets. For VMware, RISC Networks utilizes the VMware published vSphere API in order to collect information from vCenter and individual ESX servers. For ESX servers, the root password is normally required to access the vSphere API. Access to the vSphere API can be tested by pointing a web browser to: https://x.x.x.x/mob

This URL will return a login prompt that will verify the credentials required to access the vSphere API. RISC Networks does NOT use root credentials to log onto ESX or vCenter servers. The API is the only access that RISC Networks has to the VMware environment.
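The same reachability check can be scripted rather than performed in a browser. The sketch below is illustrative only (the host names are placeholders), and certificate verification is disabled on the assumption that many ESX/vCenter installations present self-signed certificates; an HTTP 401 response counts as success, because it proves the API endpoint is up and challenging for credentials:

```python
import ssl
import urllib.request
from urllib.error import HTTPError, URLError

def mob_url(host):
    """Build the vSphere Managed Object Browser URL for a host."""
    return f"https://{host}/mob"

def mob_reachable(host, timeout=5.0):
    """Return True if the MOB endpoint answers, even with an auth challenge."""
    # ESX/vCenter commonly use self-signed certificates, so skip verification
    # for this pre-check only.
    ctx = ssl._create_unverified_context()
    try:
        urllib.request.urlopen(mob_url(host), timeout=timeout, context=ctx)
        return True
    except HTTPError as err:
        return err.code == 401  # auth challenge means the API is alive
    except URLError:
        return False  # no route, refused connection, TLS failure, etc.
```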

SNMP is used to collect information from Fibre Channel switches. RISC Networks does NOT directly access the Fibre Channel network via taps or any other sniffing tools. SNMP read-only access to Fibre

Channel infrastructure is required for RISC Networks to collect information.

Application Socket Collection (Netstat) for Microsoft Windows Platforms

This section covers Netstat Application Socket Collection from Windows platforms during a RISC Networks engagement. Netstat Application Socket Collection is optional; however, the data is critical to CloudScape engagements and collection is strongly encouraged. Application Socket Collection relies on the netstat utility, and the terms are often used interchangeably. From here on it will be referred to as Netstat Application Socket Collection or simply as Netstat.

What is Netstat?

Netstat is a utility for reporting statistics from the operating system network stack. This utility has been implemented for virtually every major operating system, including Microsoft Windows NT, GNU/Linux, Apple OS X, BSD, and Oracle/Sun Solaris. Most implementations can report on a wide range of statistics, but the most common use of the utility is to provide a list of listening ports and/or active connections.

More information on the Windows implementation of netstat.exe can be found here

The specific use of the netstat command is as follows:

netstat -anop TCP
        -a:        Displays all connections and listening ports
        -n:        Displays addresses and port numbers in numerical form
        -o:        Displays the owning process ID associated with each connection
        -p TCP:    Filter results to only TCP sockets
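The tabular output of this command is straightforward to consume programmatically. The following sketch parses the data rows into structured records; the sample output is invented for illustration:

```python
from typing import List, NamedTuple

class Socket(NamedTuple):
    proto: str
    local: str
    remote: str
    state: str
    pid: int

def parse_netstat(output):
    """Parse the table produced by `netstat -anop TCP` on Windows."""
    sockets = []
    for line in output.splitlines():
        parts = line.split()
        # Data rows look like: TCP  0.0.0.0:135  0.0.0.0:0  LISTENING  884
        if len(parts) == 5 and parts[0] == "TCP":
            sockets.append(Socket(parts[0], parts[1], parts[2], parts[3], int(parts[4])))
    return sockets

sample = """
Active Connections

  Proto  Local Address          Foreign Address        State           PID
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING       884
  TCP    10.0.0.5:49707         10.0.0.9:445           ESTABLISHED     4
"""
for sock in parse_netstat(sample):
    print(sock.local, sock.remote, sock.state, sock.pid)
```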

How is Netstat Being Used?

Information from Netstat Application Socket Collection is used by the CloudScape platform to provide visibility into workload dependencies and application-to-application communication. Similar collection is performed for Linux/UNIX servers using the SNMP or SSH protocols; however, the Netstat Application Socket Collection process is specific to Windows, as the WMI service does not provide this data natively.

Netstat Application Socket Collection is an add-on component of the existing Windows performance information collection, and utilizes the Windows credentials already in place. No additional credential information is required.

How is Netstat Collected?

Netstat Application Socket Collection data is retrieved using the netstat.exe utility on Windows. This command is executed and the data is retrieved through one of two facilities: the current and legacy methods described below.

How Will This Affect My Environment?

The assessment process has been designed to have as little impact on the environment as possible. During the performance collection phase, devices are polled for data on a minimum five-minute interval. The inclusion of netstat collection using the legacy method involves a bits-per-second increase of approximately 1-2%.

Netstat Collection

The RISC Networks RN150 Virtual Appliance uses the DCE/RPC and SMB protocols to communicate with Windows devices to execute the netstat command via cmd.exe, and to collect the results of the command.

The netstat collection process uses TCP port 135 for DCE/RPC communication, and TCP port 445 for SMB communication. The process needs access to the DCE/RPC and SMB protocols in the Windows firewall.
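Before collection begins, it may be useful to confirm that these two ports are reachable from the appliance's network segment. A minimal pre-check sketch (the host value is a placeholder; run it from the same segment as the RN150 to approximate its view of the target):

```python
import socket

def ports_open(host, ports=(135, 445), timeout=3.0):
    """Check that the DCE/RPC (135) and SMB (445) ports accept TCP connections."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            # refused, filtered, or timed out
            results[port] = False
    return results
```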

If encrypted data transfer is enabled for SMB3 on the target system's ADMIN$ share, the results of the netstat command will be encrypted during transmission from the remote system to the collection process on the RN150 Virtual Appliance. If the remote system's shares are not configured for encryption, the data will be transferred in plain text. The DCE/RPC communications and Windows authentication are conducted over an encrypted channel as specified by the protocols.

A temporary plain text file is created on the remote Windows device during the lifetime of the process, which is removed on completion. The process does not leave any permanent artifacts on the remote system. Unlike the legacy method described below, this method does not involve a custom Windows service.

Details

An SMB connection is negotiated first with the remote Windows device, authenticated using the provided Windows credentials. The highest SMB protocol version, including SMB3, SMB2, and SMB1, supported by the remote device is used to open a connection with the ADMIN$ share.

A DCE/RPC connection is then opened to deliver the command to the cmd.exe utility. The command issued redirects its output to a temporary plain text file in the ADMIN$ share, whose name is the timestamp of the current time prefixed by two underscores and suffixed with a random number, e.g., ‘__1497992728.46’. The final form of the command executed on the remote Windows system is:

cmd.exe /Q /c netstat -anop TCP 1> \\127.0.0.1\ADMIN$\_filename 2>&1

The /Q flag to cmd.exe turns off echo, and the /c flag indicates that the remainder of the line is a command that should be executed, after which the instance of cmd.exe should terminate. The components following the netstat command indicate that the output stream of the command should be redirected to a file in the ADMIN$ share of the local host (in this context the remote Windows device), and that the error stream should also be redirected to that file.

The temporary file contains the results of the netstat command. The file is then opened via SMB, and the contents collected. The file is then removed from the remote Windows system.
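The construction of the temporary file name and the command line can be sketched as follows. The exact composition of the name is an assumption inferred from the example above; only the ‘__<timestamp>.<number>’ shape is documented:

```python
import random
import time

def temp_output_name():
    """Build a temp file name of the documented shape, e.g. '__1497992728.46'.

    Assumption: Unix timestamp plus a two-digit suffix; only the
    '__<timestamp>.<number>' form is documented, not its exact composition.
    """
    return f"__{int(time.time())}.{random.randint(10, 99)}"

def netstat_command(filename):
    """Reproduce the cmd.exe invocation delivered over DCE/RPC."""
    # Output and error streams are redirected into the ADMIN$ share of the
    # local host, which from the remote system's perspective is itself.
    return rf"cmd.exe /Q /c netstat -anop TCP 1> \\127.0.0.1\ADMIN$\{filename} 2>&1"
```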

Legacy Netstat Collection

The RISC Networks RN150 appliance utilizes the winexe program to collect netstat information from Windows hosts. This is an open-source utility for GNU/Linux designed for executing commands on remote Windows hosts. Information and source code for winexe can be found at: http://sourceforge.net/projects/winexe/

The RISC Networks RN150 appliance includes winexe-1.1, compiled from the official source code package available at the URL above. This includes Samba version 4.0.0alpha11.

Currently, the winexe utility supports only version 1 of the SMB protocol. Due to security concerns around this protocol version and Microsoft's recommendations regarding it, we have replaced this collection method with the new method described above.

The winexe program executes commands issued from the RN150 appliance on remote Windows hosts and returns the command output to the appliance. In order for this to work, winexe will install a service on the remote Windows host. This service, called winexesvc, is transmitted to the remote host as a component of the connection, where it is temporarily installed and started. The command is then run on the host and the output is sent to the appliance. The winexesvc service is then stopped and removed. The transmission, installation and removal of the winexesvc service uses the SMB protocol and the Windows Service Control Manager.

The process can be audited from the Event Viewer in Windows. Access the Event Viewer and select Windows Logs, then System. The process produces three event log entries:

Netstat info slide 1

Log Name: System
Source: Service Control Manager
Level: Information
Event ID: 7045
Description: A service was installed in the system. Service Name: winexesvc. Service File Name: winexesvc.exe. Service Type: user mode service. Service Start Type: demand start. Service Account: LocalSystem

Netstat info slide 2

Log Name: System
Source: Service Control Manager
Level: Information
Event ID: 7036
Description: The winexesvc service entered the running state

Netstat info slide 3

Log Name: System
Source: Service Control Manager
Level: Information
Event ID: 7036
Description: The winexesvc service entered the stopped state

The winexesvc service is resident on the remote host for several seconds only. While it is running on the host, the process entry can be seen by accessing the Task Manager.
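The same audit can be scripted instead of performed through the Event Viewer UI. The sketch below builds `wevtutil` queries that filter the System log by the event IDs listed above; the printed commands would be run on the Windows host itself:

```python
def wevtutil_query(event_id):
    """Build a wevtutil command that filters the System log by Event ID."""
    xpath = f"*[System[(EventID={event_id})]]"
    return ["wevtutil", "qe", "System", f"/q:{xpath}", "/f:text"]

# 7045: service installed; 7036: service entered the running/stopped state
for event_id in (7045, 7036):
    print(" ".join(wevtutil_query(event_id)))
```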


SSH Collection Module

The SSH Collection Module documentation has moved. Please click here to view documentation on the SSH Collection Module.

Appliance Proxy Support

The RISC Networks Virtual Appliances require outbound communication from the customer environment to the RISC Networks NOC. In cases where all outbound communication from the customer environment is required to pass through a proxy, the Virtual Appliances can be configured with the proxy parameters for this communication.

Currently, the proxy feature is only available for the RN150 appliance. The feature supports non-authenticating configurations and Basic authentication configurations. NTLM-based authentication is not currently supported, but is on the roadmap.

Configuring the Virtual Appliances for Proxy Support

When the appliance is first booted up, it will attempt to utilize DHCP to obtain an IP configuration. It will then test communication with the RISC Networks NOC. If DHCP is not available or the communication with the NOC is not successful, the user will be presented with the Interfaces section of the appliance dashboard. This section allows the user to set or modify the IP configuration of the appliance. For proxy-enabled appliances, the proxy configuration can be set from this section. Once a valid IP configuration is set and the appliance is able to communicate with the NOC, the user will be redirected to the Login page to authenticate and begin configuration of the appliance.

When setting a proxy configuration, browse to the Interfaces section of the appliance, then select the “Edit Proxy” button. This will open the dialog for setting the proxy parameters. Once the parameters are set, select the “Submit” button to apply the configuration. The dialog can then be closed. To validate the communication with the NOC, the “Refresh” button can be selected, which will perform the communication validation and indicate the results of that test.

If the proxy settings need to be modified, the same steps should be performed. When the Proxy settings dialog is opened, the current proxy configuration will be displayed. Any values that do not need to be modified can be left as-is, and will be applied alongside any modified values. Once the submit button is selected, the updated configuration will be applied.

If the Proxy Address field of the configuration dialog is empty when the form is submitted, the application will interpret this as a request to remove the proxy configuration. This can be utilized as an easy method of removing the configuration if the proxy is no longer needed or desired. Please be aware that removing the proxy configuration in an environment where the connection must be proxied may result in the appliance becoming unable to communicate with the RISC Networks NOC. Always be sure to select “Refresh” from the main Interfaces page after applying a change to the proxy to validate that the appliance can properly communicate.

Proxy Configuration Values

The proxy configuration dialog allows setting the following values:
  • Proxy Address
    • The IP address of the proxy server.
  • HTTP Port
    • The TCP port on which the proxy server accepts HTTP requests.
    • All communication to the NOC is conducted over HTTPS; however, to ensure full support for various proxy server configurations, this value is provided as well.
  • HTTPS Port
    • The TCP port on which the proxy server accepts HTTPS requests.
  • Username
    • The username for Basic authentication.
    • This should be left blank for non-authenticating proxies.
  • Password
    • The password for Basic authentication.
    • This should be left blank for non-authenticating proxies.
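For illustration, these fields map directly onto a standard proxy configuration. The sketch below shows an equivalent mapping using Python's urllib (the address and ports are placeholders); it is not the appliance's implementation, only a way to reason about how the values combine:

```python
import urllib.request

def proxy_map(proxy_addr, http_port, https_port, username=None, password=None):
    """Translate the appliance's proxy fields into a urllib proxy mapping.

    For Basic authentication, urllib accepts credentials embedded in the
    proxy URL; leave username/password as None for a non-authenticating proxy.
    """
    auth = f"{username}:{password}@" if username else ""
    return {
        "http": f"http://{auth}{proxy_addr}:{http_port}",
        "https": f"http://{auth}{proxy_addr}:{https_port}",
    }

# Install the proxy configuration for subsequent urllib requests
# (10.1.1.10 and port 3128 are placeholder values).
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler(proxy_map("10.1.1.10", 3128, 3128)))
```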

Troubleshooting Steps

If the appliance is unable to communicate with the RISC Networks NOC following the application of a proxy configuration, a support ticket can be opened through the web portal or by sending an email to help@riscnetworks.com.

When opening a support ticket regarding the proxy feature, please provide the following information:

  • Proxy software in use, for example Squid 3.5.22
  • Is an authenticating proxy in use, and if so, what type of authentication
  • Any error messages shown in the appliance interface following an unsuccessful communication test
  • Any relevant information from the proxy software logs

Database Module – Preview

The Database Module has been released as a preview feature. To run the module, customers should submit a request through their RISC Networks Customer Representative or directly through support at help@riscnetworks.com.

The purpose of this document is to describe the steps necessary to configure the RISC Database Analysis Module and provide documentation of the way the module will access your databases. We currently support performance and connectivity analytics for MySQL, MS SQL Server, and Oracle Database.

What is the Database Module?

Database servers are often among the most complicated nodes in a network. The Database Module collects information about the usage of each schema within your database and integrates client connections into our overall connectivity architecture.

How is database data collected?

Unlike most RISC data collection modalities, the Database Module does not have an automated discovery process and must be manually configured. In order for collection to take place, the DB host information and an account with adequate permissions must be manually provided. RISC recommends that a temporary dedicated user is created and used for analyzing the database; the required permissions for the temporary user are outlined below. RISC also recommends that the user is removed after the RISC assessment ends. The RISC assessment gathers metadata relating to database and schema usage. The only user-specified data that RISC collects is database hostnames, schema names, and table names. The queries that will run against your database will vary depending on which DBMS you are using; a complete list of the queries in each case is provided below. Oracle Database users: please note that due to the way Oracle manages its dataspaces, we do collect usernames in their role as database schemas.

Using the Database Module

  1. First, you will need to contact support (help@riscnetworks.com) and request that the database beta feature be enabled.
  2. When you receive confirmation that the feature has been activated, go to the RN-150 Dashboard and locate the “Additional Credentials” page at the bottom of the list.
  3. From the Credential Type drop-down menu, select “database.”
  4. Enter connection information for your first database server, and hit “Add.”
  5. Enter the server’s IP and hit “Test.” If the test is unsuccessful, press “Cancel,” verify that the credentials were entered correctly, and retry. It may also be necessary to verify that the provided account has the permissions outlined below.
  6. Enter and test connection information for each individual database server you would like to have analyzed. Oracle cluster database users should enter each server in a cluster separately and provide a direct connection to each server.
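When the connection test fails, a quick way to separate network problems from credential problems is to confirm the database listener port is reachable at all. A minimal sketch, assuming the standard default listener ports (adjust if your servers listen elsewhere):

```python
import socket

# Default listener ports for the supported platforms. These defaults are
# an assumption about your deployment, not a requirement of the module.
DEFAULT_PORTS = {"mysql": 3306, "mssql": 1433, "oracle": 1521}

def db_reachable(host, dbms, port=None, timeout=3.0):
    """Return True if the database listener accepts a TCP connection."""
    target_port = port or DEFAULT_PORTS[dbms]
    try:
        with socket.create_connection((host, target_port), timeout=timeout):
            return True
    except OSError:
        return False
```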
   

Account Permissions:

MySQL
The account provided must have SHOW DATABASES and SHOW PROCESS privileges. It also requires SELECT privileges on *.*.
MS SQL Server
The account provided must have VIEW SERVER STATE, VIEW DATABASE STATE, and VIEW ANY DATABASE permissions.
Oracle Database
The account provided must have SELECT privileges on V$INSTANCE and V$SESSION, as well as on the following DBA tables: DBA_USERS, DBA_TABLES, DBA_INDEXES, DBA_OBJECTS, DBA_SEGMENTS, and DBA_LOBS.
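For illustration only, provisioning a temporary least-privilege account along these lines might look as follows. The account name and password are placeholders; note that MySQL's process-visibility privilege is named PROCESS in GRANT syntax, and that Oracle's V$ views are granted through their underlying SYS.V_$ objects. Verify exact syntax against your DBMS version and security policy:

```python
# Illustrative DDL for a temporary assessment account per DBMS.
# 'risc_ro' and 'changeme' are placeholders; adapt host masks and
# syntax to your environment before use.
GRANTS = {
    "MySQL": [
        "CREATE USER 'risc_ro'@'%' IDENTIFIED BY 'changeme';",
        "GRANT SHOW DATABASES, PROCESS, SELECT ON *.* TO 'risc_ro'@'%';",
    ],
    "MS SQL Server": [
        "CREATE LOGIN risc_ro WITH PASSWORD = 'changeme';",
        "GRANT VIEW SERVER STATE TO risc_ro;",
        "GRANT VIEW ANY DATABASE TO risc_ro;",
        "-- VIEW DATABASE STATE is granted per database, after CREATE USER there",
    ],
    "Oracle Database": [
        "CREATE USER risc_ro IDENTIFIED BY changeme;",
        "GRANT CREATE SESSION TO risc_ro;",
        "-- V$INSTANCE / V$SESSION are synonyms for these underlying views:",
        "GRANT SELECT ON sys.v_$instance TO risc_ro;",
        "GRANT SELECT ON sys.v_$session TO risc_ro;",
        "-- plus SELECT on the DBA_* views listed above",
    ],
}

for dbms, stmts in GRANTS.items():
    print(f"-- {dbms}")
    print("\n".join(stmts))
```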

Queries run by the database module:

The queries run during the course of inventory and performance analysis depend on the DBMS.
MySQL
select @@hostname h, @@version v

SELECT SCHEMA_NAME FROM information_schema.schemata

SELECT host, db, command, state, time FROM information_schema.processlist

select db, count(distinct(user)) userCount from information_schema.processlist group by db

select count(distinct(user)) userCount from information_schema.processlist

SELECT *, unix_timestamp(create_time) ct, unix_timestamp(update_time) ut, unix_timestamp(check_time) cht FROM INFORMATION_SCHEMA.TABLES
MS SQL Server
select SERVERPROPERTY ('ProductVersion') v, SERVERPROPERTY ('MachineName') h

SELECT name, database_id, create_date FROM sys.databases

SELECT name s FROM sys.databases

SELECT conn.client_net_address,
       conn.client_tcp_port,
       sess.status,
       sess.last_request_start_time,
       DB_NAME(sess.database_id) AS db
FROM sys.dm_exec_sessions sess
LEFT JOIN sys.dm_exec_connections conn
  ON sess.session_id=conn.session_id
WHERE sess.is_user_process=1

select DB_NAME(database_id) as db, count(distinct(login_name)) userCount from sys.dm_exec_sessions group by DB_NAME(database_id)

select count(distinct(login_name)) userCount from sys.dm_exec_sessions

select
t.name as tableName,
s.name as secondarySchema,
datediff(s, '1970-01-01 00:00:00', max(t.create_date)) as createDate,
datediff(s, '1970-01-01 00:00:00', max(t.modify_date)) as updateDate,
max(p.rows) as RowCounts,
sum(a.total_pages*8) as totalSpaceKB,
sum(a.used_pages*8) as usedSpaceKB,
sum(case when i.index_id < 2 then a.data_pages*8 else 0 end) as dataSpaceKB,
sum(a.used_pages*8)-sum(case when i.index_id < 2 then a.data_pages*8 else 0 end) as indexSpaceKB
from $schema.sys.tables t
inner join $schema.sys.indexes i on t.object_id = i.object_id
inner join $schema.sys.partitions p on i.object_id = p.object_id and i.index_id = p.index_id
inner join $schema.sys.allocation_units a on p.partition_id = a.container_id
inner join $schema.sys.schemas s on t.schema_id=s.schema_id
group by t.name, s.name
Oracle Database
SELECT HOST_NAME H, VERSION V FROM V$INSTANCE

SELECT username FROM dba_users u WHERE EXISTS (SELECT 1 FROM dba_objects o WHERE o.owner = u.username)

SELECT MACHINE, PORT, SCHEMANAME, STATUS, COMMAND, LAST_CALL_ET FROM v$session WHERE username IS NOT NULL

select schemaname DB, count(distinct(user)) USERCOUNT from v$session group by schemaname

select count(distinct(user)) userCount from v$session

select
t.name as tableName is omitted here; the Oracle query is:

Cisco Discovery Services

RISC Networks utilizes Cisco Discovery Services (CDS) in order to obtain more specific information regarding Cisco infrastructure at a customer site. RISC Networks, Cisco, and Cisco partners respect that customers are concerned about their privacy and network security and may be apprehensive about allowing an engineer to use a network assessment tool to discover data from their network and subsequently upload the data to Cisco using Cisco Discovery Service (CDS) for data analysis.

Cisco and RISC Networks have implemented several mechanisms to ensure customer data security. In addition, you will be required to accept an “Authorization to Proceed” (ATP) agreement before RISC Networks will upload data to Cisco CDS. An ATP helps ensure protection of customer data and specifically prohibits the dissemination of such data, providing assurance that neither Cisco nor RISC Networks will share or divulge customer data. Customers should be advised that data will be used only for the purpose of network analysis.

Show Commands used for Cisco Discovery Services:

  • show version
  • show inventory
  • show diag
  • show hardware
  • show module
  • show idprom all
  • show mls qos
  • show mls qos interface
  • show mls qos interface statistics
  • show policy-map interface
  • show running-config
  • show configuration

Once inventory data is collected by RISC Networks, if requested by the customer, it will be uploaded to Cisco Systems’ CDS application at the following URL: https://wsgx.cisco.com.

Transferring the Data – If utilizing CDS for analysis, customer network data is transferred from the RISC Networks virtual appliance to Cisco using the secure HTTPS protocol to an internal Cisco CDS web service gateway, where it is processed to provide detailed EoX, PSIRT, field notice, and service coverage analysis.

Before transferring data to Cisco for analysis, passwords and security credentials are stripped from the data. To view a list of password scrubbing commands, please click here. SNMP data does not contain passwords or other sensitive configuration information. Instead of using IP addresses or host names to identify a device, a generic Device ID is assigned. After processing, the analyzed data is transferred back to the RISC Networks virtual appliance onsite at the customer using the same secure HTTPS protocol. It is then uploaded to the RISC Networks NOC as part of normal data upload procedures and used to generate the network analytics reports.

Storing the Data – The raw discovery data and analyzed XML report data are stored in a secure Cisco database behind Cisco’s firewall. The data is accessible only to CDS administrators for troubleshooting purposes. Other Cisco personnel may have limited access to high level transaction reporting that does not include customer inventory details.

All data is stored and eventually archived unless purging is specifically requested by the customer. The customer’s data is only accessible by Cisco or the partner who initiated the engagement.

Data for “Know the Network” (KTN), or service coverage reports, if requested, is also securely stored in
Cisco databases behind the firewall. KTN reports are available only to the engineer who initiated the
engagement and the Cisco service account team.

Purging Customer Data from Cisco Databases – The data obtained in the discovery process and uploaded to Cisco for processing can be deleted from Cisco’s database if requested by the Customer.

If service coverage reports were requested, the KTN data and reports need to be purged separately from the KTN portal (http://tools.cisco.com/ktn/). KTN data can be deleted from the report view.

Partner-Specific Security Issues – Discovery information will not be sold or distributed to anyone outside of Cisco, or used for direct marketing purposes.

Advanced Setup (NAT Configuration)

If your SNMP access is restricted to certain IP addresses, and you have a server that is included in that access (i.e., a physical server running existing monitoring software), you can run the appliance in VMware Player or VMware Workstation on that server with a NAT configuration, so that the appliance's traffic will appear to originate from that host.

Setting the virtual appliance to use NAT with VMware Player & VMware Workstation

VMware Workstation

  1. Edit virtual machine settings
  2. Highlight Network Adapter
  3. Select ‘NAT: Used to share the host’s IP address’
  4. Select ok
  5. Power on the Virtual Appliance

VMware Player

  1. Select ‘Player’
  2. Select ‘Manage’
  3. Select ‘Virtual Machine Settings…’
  4. Select ‘Network Adapter’
  5. Select ‘NAT: Used to share the host’s IP address’
  6. Select ok
  7. Play the Virtual Appliance