
  • Hints and Tips for Netskope Operationalization

    Introduction

    The purpose of this article is to highlight issues and recommendations related to the production implementation of a Netskope solution, based on our experiences using the technology in the field. The article is not intended to replace the existing Netskope documentation, but rather to supplement it with ideas that should be considered when embarking on an implementation project. This paper is not aimed at beginners with Netskope. It assumes an understanding of the Netskope architecture, including both cloud and on-premise components.

    The Netskope Tenant

    Tenant Architecture

    One of the earliest decisions that you will need to make is whether to have a single-tenant implementation, or whether to have separate development and production tenants. You need to consider whether you regard the Netskope solution as a network architecture component, or as a cloud-based application solution. Users of the former mindset will generally be used to working with firewalls, proxies, and similar components, and will not expect to need a development environment. Rather, they will be inclined to make policy changes directly in their production system in much the same way that they would make firewall changes. However, it should be noted that the level of policy configuration available within Netskope can be quite complex, and making an incorrect change could cause regulatory issues if certain violations are no longer being reported due to an error. Incorrect configuration might also cause applications to be erroneously blocked. Policies can be defined to test a specific OU/Group/User, which allows for testing before implementing them across all users. There is another caveat to be aware of: Netskope currently does not support migrating policies between tenants. Therefore, if you were to use separate tenants, you would need to manually copy the tested policy from one environment to the other.
This is clearly a potential source of human error, and so it is recommended that a second person review the work of the person who performs the production update. (Note that a migration script is planned, and should hopefully be available later in 2018.) Netskope recommends that the two-tenant approach be limited to specific use cases. At Cedrus, we are inclined towards taking a cautious approach when risking production problems due to development activities. Any production update should follow normal production change control procedures and approvals, even if there is only a single tenant. This is an area that does not have a clear-cut answer and requires careful planning on the part of the user, irrespective of the approach taken.

    Introspection

    Introspection is used to examine data at rest within supported, sanctioned applications, looking for policy violations, malware, etc. Configuration is performed within the tenant, under Settings > Introspection. Documentation is available within the tenant for configuration of supported applications. The configuration process may involve enabling API access within the target application, and then granting access to Netskope as a client to that API, using the OAuth protocol. One critical area that needs to be considered is the granting of appropriate privileges to the service account that is set up to run Introspection. In general, you should follow the "least privilege" model. If you are planning to perform Introspection against Salesforce instances, you should be aware that the Netskope documentation recommends cloning a Sys Admin user, which would be drastically over-privileged for the task that it needs to perform, and thus could be a security risk. If the account were to be compromised it would have super-user access, and as it is a service account it is not necessarily easy to audit individual user activity.
Based on our experience, a Sys Admin account is not necessary, provided the appropriate permissions are set on the introspection user profile. The required permissions for the introspection user are:

    - API Enabled
    - Manage Chatter Messages and Direct Messages
    - Manage Unlisted Groups
    - Password Never Expires
    - View All Data
    - View All Users

    Similar issues are likely to crop up for other services, so this is an area to spend time evaluating. One example is Microsoft SharePoint, where a Netskope app needs to be installed in the SharePoint catalog. This app is visible to all SharePoint site admins. You will need to communicate to them what it is, so that they do not attempt to install it in their own instances.

    Forensics

    Forensics is a way of providing additional information to the operational analyst who is researching an introspection violation. Without Forensics, the incident management system will list your Data Loss Prevention (DLP) violations, but won't show you the actual data that caused the issue. This is because Netskope only stores metadata, and never your actual user data. When you set up a Forensics profile you will configure a storage location within one of your cloud storage applications, such as Box or OneDrive, that you have already configured for Introspection. Netskope will then store additional data in that location to augment incident management. You do not need to have a separate Forensics profile for every service that you are introspecting – you choose one, and store data for all other services there. For example, your DLP violations from Salesforce might store their forensic data in Box. Note, however, that even with Forensics enabled, there may be insufficient data available to the incident response team to fully evaluate whether an incident is an actual violation or merely a false positive. (Forensics stores a small amount of the data from around the triggering attribute, but not the whole document.)
In that case, the analyst will need to review the full document that triggered the alert. You can download or preview the file using the Netskope UI.

    Reverse Proxy

    Reverse Proxy is one of Netskope's models for inline monitoring. It adds specific value for its ability to intercept interactions between an unmanaged client and a sanctioned cloud application. This is achieved by modifying the single sign-on flow using either the SAML 2.0 or WS-Federation protocols. However, it is only useful for a small percentage of cases. Very few services are available for integration, and even for those that are, only HTTP traffic is supported. This makes Reverse Proxy of minimal value for an application like Office 365, where the bulk of activity will be made using either native or sync clients. It's best to consider this as a niche solution for certain corner cases that cannot be addressed by any of the forward proxy options (such as the Netskope Client, the Secure Forwarder, or Proxy Chaining), or as a supplemental solution in addition to a forward proxy. It should be noted that Reverse Proxy will be bypassed if the user is already being monitored by a forward proxy connection, so there is no conflict in setting up both.

    On-Premise Components

    On-Premise Log Parser

    Configuring the OPLP is relatively straightforward and well documented, so there is not much to add here on that front. It is recommended to stream syslog events to the OPLP so that you have an up-to-date view of log data in your tenant, with minimal manual intervention necessary. From a continuity and reliability perspective, it's hard to justify making the OPLP redundant. If it's down for a day and needs to be rebuilt, the impact is minimal. Discovery data is typically looked at analytically, rather than at a transaction level. If data is not flowing to the tenant, the administrator will receive notifications and the problem can be addressed.
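    As an illustration of the syslog streaming recommendation above, a forwarding rule on a Linux log source might look like the following (a hedged sketch only: the OPLP hostname and port shown are placeholder assumptions, and the correct protocol and destination depend on your own OPLP listener configuration):

```
# /etc/rsyslog.d/50-oplp.conf -- forward log events to the OPLP
# "oplp.internal.example.com" and port 514 are assumed values;
# use @@ for TCP or a single @ for UDP, per your OPLP listener settings.
*.* @@oplp.internal.example.com:514
```

    The same idea applies when the log source is a firewall or web proxy rather than a Linux host: point its syslog export at the OPLP so discovery data flows to the tenant continuously.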
One area that should be looked at closely is the content of the log data. We will talk in more detail about the governance of the system later, but the operations team that is managing the Netskope product will need sufficient data to be able to report back to business divisions when anomalies or unsafe behaviors are discovered. To do so, they may need data that is not present in the log file, such as a user's email address, location, management chain, etc. There are ways to enrich the log data, many of which were addressed in a previous blog post. We will briefly touch on them again in the next section on the Active Directory integration capabilities of Netskope.

    Active Directory Integration

    This topic is covered in detail in the following blog posts: Using Active Directory with Netskope (Part 1) and Using Active Directory with Netskope (Part 2). As mentioned in the previous section on the OPLP, it is important to look at how complete the data available in the Netskope tenant is and, if necessary, enrich it. These two papers provide information on a set of tools that help you do that, including the Directory Importer, the AD Connector and the REST API. As with the OPLP, the AD tools do not necessarily need to be made highly available, as the impact of downtime is minimal.

    Forward Proxy Options

    We discussed in section 2 the use of Introspection and Reverse Proxy for policy enforcement. However, both options only cover a small number of scenarios, specifically for sanctioned applications that are currently supported by Netskope. Addressing policy and DLP for the mass of applications requires a forward proxying solution that enables the bulk of network traffic to be routed through the Netskope cloud.
There are three main options to accomplish this:

    - the Netskope Client
    - the Netskope Secure Forwarder virtual appliance
    - proxy chaining

    Which of these options makes the most sense will depend on several factors, including existing network infrastructure such as firewalls, web proxies and DNS services; the availability of device management capabilities; and your tolerance for installing software on managed endpoints. Proxy chaining is probably the simplest solution, but may not be an available option depending on the web proxy in use. For example, it is not available for the Zscaler cloud-based proxy. Both proxy chaining and the Secure Forwarder only address traffic that flows through the corporate network, either originating from on-premise devices, or via VPN. Neither one will see "direct to cloud" traffic from remote devices, whether home laptops that do not connect via VPN, or mobile devices. The Netskope Client is more comprehensive, covering any managed device, whether connected to your network or not. The only DLP gap then becomes unmanaged devices. This can be partially mitigated by using Reverse Proxy for sanctioned applications, but Netskope itself cannot address the end user who connects directly to a cloud service from an unmanaged device and shares data to an unsanctioned application. Addressing this problem requires a more holistic approach to data security, addressing the ways in which data could be sent to the unmanaged device in the first place. Typical ways in which this can happen are:

    - a user emailing a file to themselves
    - a user sharing a file via an unsanctioned cloud application
    - a user connecting a device via a USB port

    The email problem can be addressed by blocking unsanctioned webmail at the firewall or web proxy, and by monitoring sanctioned email traffic using Netskope or native email DLP. File sharing traffic can be addressed in a similar way using Netskope DLP. USB lockdown is a well-understood problem.
Integrating Netskope as part of an overall data security strategy can greatly enhance your ability to prevent leakage of privileged or regulated data.

    Operational Issues

    Like any other major security component, Netskope requires ongoing care and feeding. As part of the implementation, it will be necessary to address the operational aspects of the ongoing process lifecycle. It's likely that a new operations team will be formed that specifically supports the Netskope tool and the governance and management processes that surround it. Solid communication to major stakeholders, so they know what to expect and what is expected of them, is key for a successful operation.

    Roles and Responsibilities

    Netskope provides certain built-in roles, including Tenant Admin and Delegated Admin. A Delegated Admin is similar to a Tenant Admin, but does not have the ability to add new Admin users, create roles, or make a small number of other changes. It's also possible to create custom roles and assign them permissions. You can also integrate with your existing Single Sign-On (SSO) solution to enable role-based access control. Netskope supports SSO using SAML 2.0 with the Service Provider (SP) initiated flow. This also allows for the use of multi-factor authentication with the Netskope console. You will likely want to have at least two users defined as Tenant Admins, so that they can back each other up. You might also want to have Delegated Admins if you have a large organization and you need regional or distributed control of certain aspects of the system. If you want to provide access to aspects of the Netskope console to users who are not full-time operators of the product, but who need to see, for example, reporting or analytical data, you can create custom roles with just the necessary permissions. For example, you might want to give restricted access to users from Risk Management, or from the operational areas of business divisions.
It's also possible to limit the data that non-Admin users can see:

    - Some data can be obfuscated if necessary. This includes user information, source location information, file information and application information.
    - The scope of the user population that is visible can be limited to certain OUs, Groups or individual Users.

    Governance Processes

    The specific operations processes that are necessary will depend on exactly how a customer chooses to use Netskope. Here are some typical processes that might be used.

    Application Discovery and Remediation

    This is the ongoing process for using cloud application discovery in Netskope to drive the application evaluation process for in-use, non-sanctioned applications (a.k.a. Shadow IT). This will need to include a prioritization component to ensure that applications that are of high risk are addressed promptly. A typical process might look something like this:

    - The Netskope Operations team will determine which category of cloud applications has the highest priority. For example, Cloud Storage might be regarded as the riskiest area.
    - The Operations team will use the Netskope reporting tool to create a prioritized list of unsanctioned applications that are in use within the organization for the selected category, based on a combination of perceived risk and volume of usage.
    - For the highest priority application, the Operations team will use the Netskope reporting tool to create and prioritize reports (for each business area) of the user activity within those applications.
    - Reports will be delivered to the appropriate Risk officers or business area management for further investigation.
    - Once the appropriate teams have completed their investigation, the application can be removed from subsequent reports by use of the Netskope "tagging" capabilities.
    - An application may be tagged as "Sanctioned" if it has been determined that there is a valid business case for its usage, and it satisfies the organization's vendor and system security controls and standards, respective to the data being used.
    - If an application is not regarded as important enough to the organization to sanction its usage, but the Risk team is satisfied that the application usage is not harmful, the application can be tagged as "Ignored" so that it may be left off future reports.
    - If it is determined that ongoing monitoring is needed, the application can be tagged as "Monitored", allowing it to be removed from triage reporting but included in ongoing usage monitoring reporting. These may be temporary designations to allow a business area to migrate to a sanctioned alternative.
    - Applications which are not safe for continued usage may be blocked, either immediately, or after a grace period.

    The above process will then be repeated for the next most critical application.

    Incident Management

    Various types of incidents may be discovered based on data captured within the Netskope tool. The categories of incidents include:

    - User behavior anomalies
    - Compromised credentials
    - Malware
    - General activity alerts, including policy and DLP triggers

    For each of these categories, the Operations team will extract the relevant data from the Netskope console, and distribute it to the teams that need to take action. This might use an internal ticketing system. For example, a compromised credential might trigger a password reset, while DLP violations might involve both business group management and the Risk team.

    DLP Rule Tuning

    DLP rules are used to find certain types of restricted, regulated or privileged data inside documents or requests. Often, to avoid false positives, it is necessary to look for data in certain formats, such as a social security number or a credit card number, that is located physically close to a text label that identifies the data.
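    To make the proximity idea concrete, here is a minimal sketch in Python of a label-proximity check (an illustrative stand-in, not Netskope's actual DLP engine; the patterns and the 40-character window are assumed values):

```python
import re

# Candidate pattern: a US social security number, with or without dashes.
SSN = re.compile(r'\b\d{3}-?\d{2}-?\d{4}\b')
# Identifying label that must appear near the candidate match.
LABEL = re.compile(r'social\s*security|ssn', re.IGNORECASE)

def proximity_hit(text, window=40):
    """Return True only when an SSN-shaped number sits within
    `window` characters of an identifying label."""
    for m in SSN.finditer(text):
        lo = max(0, m.start() - window)
        hi = m.end() + window
        if LABEL.search(text[lo:hi]):
            return True
    return False
```

    In this sketch, tightening the rule corresponds to shrinking `window`, and loosening it corresponds to growing it.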
Without this, every nine-digit number will trigger a personally identifiable information check, even if the number is something completely harmless. Typically, you will use a rule that looks for the data within some number of characters of the identifier. If too many false positives are found, you might want to tighten that restriction. If you feel that you are possibly missing valid hits, then you will loosen it. But there is no absolute rule as to what the correct value is; this is a trial-and-error process. As a recommendation, it's easiest to tighten the restriction when you have too many false positives. There is no obvious way of knowing if you are missing items, so it's best to start on the generous side.

    Summary

    This article covers a wide variety of areas related to taking a Netskope solution from concept into production. Based on our experience with the product, we have tried to document some of the issues that you need to be particularly aware of. We hope that you find it helpful! To learn more about our Netskope services, please click here!

    Paul Ilechko | Senior Security Architect

  • Using Active Directory with Netskope (Part 2)

In Part 1, we discussed why you would want to integrate AD with Netskope, the AD integration tools Netskope offers, and briefly touched on Netskope's REST API. In Part 2 we are going to dive deeper into using the REST API to add additional attributes, and provide a sample PowerShell script that adds further automation capabilities. Keep in mind that this document is not intended to replace the existing Netskope documentation, and will not cover the implementation details for the Netskope tools, as that is already well defined.

    Netskope REST API and Directory Importer

    As was mentioned in Part 1, not only does the Netskope REST API offer a variety of options to query data from Netskope, but it also provides the ability to upload custom user attributes from a CSV or TXT file, as long as it is in a specific format (Netskope documentation on file format and limitations can be found at Administrators > Administration > REST API > Add Custom User Attributes). While this sounds eerily similar to what the Directory Importer is capable of (with a little additional configuration), using the REST API provides a few distinct advantages. Let's take a real-world example. Your web proxy uses an employee's email address as a user key in order to correlate traffic 'events' to a user, and you have configured the On-Premises Log Parser (OPLP) to upload those events to Netskope. To add some additional user information to Netskope, you configure the Directory Importer to use the mail (email) field to correlate when uploading all of your additional user attributes. But what happens when you have data from a different source, such as Forward or Reverse Proxy data, and the application is not using the email field as the user identity? (Perhaps it uses the employee ID from your HR system.)
In this scenario, the events generated from that source are not going to contain any of the additional user information that was pulled from AD via the Directory Importer, because the user keys do not correlate with each other. That's where the REST API comes in. As we mentioned in Part 1, you can have duplicate rows in the CSV file, one for each possible key that can be matched against, which allows the uploaded attribute data to correlate with all events, even when they are using different keys. Furthermore, with the REST API based solution, you can pull user information from sources besides AD (e.g. an HR system) and include it in your file to be uploaded.

    Additional User Attribute Script

    Netskope provides a bash script that can be used to upload a CSV or TXT file to your tenant. The script works as designed, but when trying to implement this as part of a solution a few issues need to be considered. It is likely that you will have already provisioned a Windows instance for the Directory Importer. But since the Netskope-provided script is a bash shell script, you need to provision a Linux instance in order to run it. This is extra effort and extra cost. You will probably need to work with your Active Directory team to obtain an export from AD, format it, FTP it over to the Linux instance, and then run the shell script to upload it to Netskope. Parts of this can be automated, but as a solution it is cumbersome and potentially fragile. In an effort to simplify the solution, we have created a script (attached below) that performs the same functionality as Netskope's bash script, but in Windows PowerShell. This means that you can run it on the same Windows instance as the Directory Importer, where it will extract the data from AD, build a correctly formatted CSV file, and upload it to your Netskope tenant. Let's go through the various parts.
    exportADAttributesToCSV

    This function is responsible for querying AD and creating the corresponding CSV file in the appropriate format, in preparation for its upload to the Netskope tenant. The lines in here that you must configure are as follows:

    - Properties to pull from AD: change the values in the parentheses to your choosing. You should limit this to 15 attributes, as that is Netskope's current limit for custom attributes.
    - Names of the custom attributes to be uploaded: the first $sw.WriteLine is where the header row is written into the CSV file. These will be the names displayed within SkopeIT. Also, keep in mind that the first column will be used as the key to correlate on (i.e. mail).
    - ManagerDisplayName function: this block will populate the manager display name with an empty string if it is empty or null; otherwise it will either query AD for each user to get their displayName (heavy performance impact), or attempt to parse the manager's name out of the canonical name. This can be removed if managerName is not one of the attributes pulled.
    - Adding records to the CSV: this section is where the actual records are added to the CSV file under the header row. It will need to be altered to match the attributes in the header, in order and count. If you have applications that use different keys, this is the place to add additional $sw.WriteLine statements to add more than one record per user.

    jsonValue

    This function is responsible for parsing the responses from Netskope, trimming additional whitespace and returning the value at a given spot in the JSON body.

    getToken

    getToken is responsible for using the NSUI and NSRestToken variables and calling the Netskope tenant to get a token for subsequent REST requests. This token will be used when uploading the custom attributes to the tenant.
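    The multi-key idea behind those extra $sw.WriteLine statements can be sketched as follows (a Python illustration of the row layout, not the PowerShell script itself; the attribute names and keys are assumed examples):

```python
# One attribute row is emitted per possible user key, so that events
# keyed on either the email address or the HR employee ID correlate
# to the same set of custom attributes.
def build_rows(users):
    rows = [["key", "department", "managerName"]]  # header row
    for u in users:
        attrs = [u["department"], u["managerName"]]
        rows.append([u["mail"]] + attrs)        # key #1: email address
        rows.append([u["employeeId"]] + attrs)  # key #2: employee ID
    return rows

users = [{"mail": "jdoe@example.com", "employeeId": "E1234",
          "department": "Finance", "managerName": "Ann Smith"}]
csv_lines = [",".join(r) for r in build_rows(users)]
```

    The duplicate rows carry identical attribute values; only the key column differs, which is what lets differently keyed event sources resolve to the same user data.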
    uploadFile

    This appropriately named function is responsible for chunking the CSV file (breaking it into smaller pieces) and uploading it to the Netskope tenant. It is primarily a port of Netskope's script to PowerShell.

    getManagerName

    getManagerName is responsible for attempting to parse the manager's full name out of the manager's canonical name, and will require some alteration depending on the way the canonical name is formatted within AD. Currently, it is set for 'CN=LastName\, FirstName MiddleName,OU=global,DC=mydomain,DC=com'. This function is an effort to reduce the number of requests made to AD and increase the script's performance, as it will not have to wait for multiple responses from AD. If managerName is not an attribute you intend to upload, or you elect to go the route of querying AD for the manager's displayName, you can do that within the exportADAttributesToCSV function.

    Main Block

    The main block is responsible for allowing insecure HTTPS connections (the Netskope script also does this, but PowerShell has an issue with it), some timing logic for the logs, and calling the various functions appropriately.

    Configuring Windows for the Additional Attribute PS Script

    The script is a modified version of Netskope's additional attribute shell script that is designed to run in Microsoft PowerShell. It requires PowerShell v4.0 or higher, enabling the AD module for PowerShell in Roles and Features, and installing curl, as PowerShell does not have native functionality to make multipart/form-data REST requests.
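    The parsing approach getManagerName takes can be sketched as follows (a Python rendering of the same idea for illustration; the actual script is PowerShell, and your AD name format may differ):

```python
import re

def manager_name_from_canonical(dn):
    """Extract 'FirstName MiddleName LastName' from a name such as
    'CN=LastName\\, FirstName MiddleName,OU=global,DC=mydomain,DC=com'."""
    # Split on commas that are NOT escaped with a backslash, and keep
    # only the leading CN= component.
    rdn = re.split(r'(?<!\\),', dn)[0]
    if not rdn.startswith('CN='):
        return ''
    name = rdn[3:].replace('\\,', ',')  # drop 'CN=' and unescape the comma
    last, sep, rest = name.partition(',')
    # Reorder 'Last, First Middle' into 'First Middle Last'.
    return f'{rest.strip()} {last.strip()}' if sep else name
```

    As the article notes, parsing the name locally like this avoids one extra AD round trip per user, at the cost of being sensitive to the exact escaping convention in use.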
    Installing the AD Module for PowerShell

    To install the AD module for PowerShell, you must log on as an admin and follow these steps:

    - Open Server Manager, select 'Manage' in the upper right corner and then 'Add Roles and Features' from the drop-down
    - Select 'Role-based or feature-based installation' and hit next
    - Select the local server and hit next
    - Jump to 'Features' in the left pane, scroll down to 'Remote Server Administration Tools' and expand
    - Expand 'Role Administration Tools' and 'AD DS and AD LDS Tools'
    - Select 'Active Directory module for Windows PowerShell' and hit next
    - Confirm the installation and complete

    Installing curl

    To install curl, you must log on as an admin and follow these steps:

    - Download the latest curl installer for your Windows environment
    - Install curl and accept all defaults
    - Locate where the curl executable is installed (likely C:\Program Files\cURL\bin if all defaults were selected) and save this for the script variable configuration

    Creating an Encrypted Password File

    To keep the AD user password secure, the following is a process to create a secure string and save it in an encrypted file. Since this process uses the Windows Data Protection API (DPAPI), you must do the following as the user that will be used to run the script. Failure to do so will result in the inability of the script to decrypt the password.

    - Log into the server as the user that will be running the script
    - Open a PowerShell window and type "(Get-Credential).Password | ConvertFrom-SecureString | Out-File <path>", where <path> is where you would like the encrypted password file to be stored, e.g. (Get-Credential).Password | ConvertFrom-SecureString | Out-File "C:\PSScripts\ExportADUsers\SecurePW.txt"
    - Save the path for the script configuration

    Configuring Script Variables

    The following variables should be configured in the script to match your environment:

    - DEBUG – toggle to increase logging when script issues are encountered.
    - NSUI – domain name of the Netskope tenant where attributes are to be uploaded
    - NSRestToken – token used to make REST calls to the Netskope tenant. If this value is changed on the tenant, then it must be updated in the script.
    - maxChunkSize – if the AD export file is larger than this value, the file will be divided into chunks of this size, plus one chunk smaller than or equal to it. The chunks will then be uploaded to the Netskope tenant in multiple parts. The recommended size is 5MB.
    - path – path where scripts and files exist. Must end in /*.*
    - csvFile – path and name of the CSV file (e.g. "$path\ALLADUsers_$logDate.csv")
    - logFile – path and name of the log file (e.g. "$path\UploadCSVtoNS.log")
    - ADServer – AD server name (e.g. "")
    - searchBase – lowest level in AD where queries are performed (e.g. "OU=Global,DC=mydomain,DC=com")
    - user – user used to query AD. Must have read access to AD and be the same user that created the encrypted password file
    - pwFile – location of the AD user encrypted password file from the previous section

    Scheduling the Script via Task Scheduler

    To keep up with an ever-changing global directory, this script should be automated so that the user data on the Netskope tenant is kept up to date. It is recommended to run it daily using Windows Task Scheduler. If this needs to be changed:

    - Log into the server as an admin
    - Open Task Scheduler under Administrative Tools
    - Browse under Task Scheduler Library – Netskope to find 'Run AD Upload Script'
    - Right-click and select Properties

    Under the tabs, these options should be set:

    - General: when running this task, use the account that has access to AD and that you used to create the password file. Run whether user is logged on or not – enabled. Run with highest privileges – enabled.
    - Triggers: one trigger should exist to start the script at a given time, and it should be scheduled to terminate if it goes over an allotted amount of time. Begin the task – On a schedule. Set timing to the preferred period (e.g. Daily @ 0800 GMT). Stop task if it runs longer than – should be set to less than the frequency the script is set to run at. Enabled – enabled.
    - Actions: should be set to start a program. Program/script – path to the PowerShell executable (e.g. C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe). Add arguments – add the script to execute once PowerShell is opened, using the '-File' argument and the absolute path to the script (e.g. -File "C:\PSScripts\ExportADUsers\NS_UserAttr_Upload_v1_0.ps1")
    - Conditions: none
    - Settings: Allow task to be run on demand – enabled. Stop the task if it runs longer than – enabled. If the running task does not end when requested, force it to stop – enabled. If the task is already running, then the following rule applies – Do not start a new instance.

    Maintenance

    Manual periodic cleanup of log files and CSV files will need to be performed in the script directory so the server does not run out of storage space. Our recommendation is to clean files up at least once a month. Alternatively, you could implement those capabilities in your version of the script, or have a separately scheduled PowerShell script to perform cleanup. To access the script, please click here.

    Paul Ilechko | Senior Security Architect
    Andrew Hejnas | Cloud Security Specialist & Solutions Architect

  • A Home for CASB (Cloud Access Security Broker)

Over the past 18 months, I've been working on CASB in some form or another, including:

    - Educational architectural and technical videos
    - Request for Proposal (RFP) assistance
    - Pre-sales presentations and demos
    - Proof of Concepts (POCs)
    - Implementation
    - Operations build-out and transition

    I've discovered some interesting things working with vendors, clients, and our own security technical staff here at Cedrus. One of them is about the ownership model. There is not a 1:1 map when you compare CASB solution features to the structures of the organizations that are deploying them. There seems to be a lack of organizational placement, a permanent home, when it comes to CASB. This extends both to technology and business process ownership. Most CASB solutions are a natural evolution out of the network layer of technology, as are many of the key players at CASB vendors. These folks are experts in networks, firewalls, proxies, Intrusion Detection Systems (IDS)/Intrusion Prevention Systems (IPS), Security Information and Event Management (SIEM), etc. However, many of the features being offered by CASB extend into areas that don't typically overlap with the responsibilities of the teams that run these areas of the Security Operations Center (SOC). These include things like Identity and Access Management (IAM), Data Loss Prevention (DLP), Encryption, Application Programming Interface (API) integration, and Malware prevention. Working on technical integrations with CASB, there is a need to bridge at least four groups that are often separate in enterprises:

    - Networks/Firewalls/Proxies
    - Active Directory Admins
    - Identity and Access Management (IAM) Team(s)
    - Information/Data Protection
    - and Public Key Infrastructure (PKI) / Encryption, if it is separate from one of the other teams

    That's only the technical part. From an operational perspective, most of the work CASBs are doing is directly related to people, applications, and data.
For instance:
- Encrypt Protected Health Information (PHI) when it gets stored in Google
- Scan all documents in the corporate OneDrive to find and move Personally Identifiable Information (PII)
- Prevent people from uploading confidential documents as attachments on LinkedIn

This brings up the question: what is the best group to manage CASB? All of this means that the people constructing and approving policy need an understanding of what’s important to the business, what regulatory mandates are instructing the organization to do, and what makes a “good” cloud app vendor versus a “risky” one. A strong grasp of the change control process must be established and followed. As with SIEM, false-positive tuning has to be done by this team within the CASB tool in order to get useful alerting that can drive concrete action. We also need these folks to be able to understand and/or work with IAM Federated Single Sign-On (SSO) configurations and redirects, PKI certificates, and DLP policies. Finally, this group has to be able to engage the business constructively, to help it transition from risky to sanctioned apps, and to educate personnel on risky actions. With CASB being so new, many organizations have only a small portion of functionality deployed, such as the application discovery features that can help organizations rein in ever-expanding Shadow IT. Discovery functionality can easily be managed by an existing team as a secondary responsibility: this person or team can produce reports that are reviewed, with action taken out of band.

A home for CASB
As CASB solutions get integrated with full enterprise security systems and processes, this won’t be enough. At minimum, a Center of Excellence (COE) will have to be established for CASB. Long term, I believe a business service is needed to effectively leverage the solution for maximum risk reduction with minimum business disruption.
I would love to hear other views on this as well, so please comment and share your insight! Kyle Watson Partner, Information Security at Cedrus Digital

  • The Latest Trend in FinTech

With how quickly technology is advancing in the modern world, innovations of all shapes and sizes are happening every day. In addition to the technology itself getting better, these technologies are having more and more impact on various industries, including banking. For decades, the banking industry hasn’t seen much in the way of innovation. Sure, it’s nice to be able to use our phones to send and receive money, but true innovation has been lacking in the space for a while. However, the emergence of FinTech has given the financial and banking industry the innovation it so desperately needed. FinTech aims to compete with and improve upon the traditional financial services methods we have had in place for so many years. Despite being a fairly new wave of technology, FinTech is a huge industry, with many companies and projects popping up to solve a variety of issues with financial services. In addition to more companies and individuals helping to innovate in the space, consumers are also responding positively to FinTech. According to an EY survey on FinTech adoption, 33% of those in surveyed markets use at least two different FinTech services, and 84% of people are aware of the various FinTech services out there. While everyone is taking part in this FinTech revolution, there is no doubt that millennials have played a big part in this explosion of growth. Millennials are, unsurprisingly, the people who use FinTech services more than anyone else. Nearly 50% of 25-34 year olds use FinTech frequently, and do not be shocked to see that number continue to climb in 2018. So what makes millennials more likely to adopt consumer-end FinTech solutions? Experts believe it’s because they don’t have strong or established relationships with banks, have been using technology for most of their teenage/adult lives, and are comfortable with the idea of non-traditional financial providers.
As the year comes to an end, we thought we would take some time to look at trends such as this, and a few more that we’ve seen emerging from the FinTech space.

Financial Firms and Institutions Are Dipping Their Toes into FinTech
Let’s face it: eventually, everyone succumbs to technology that makes their lives easier. There was a time when computers and cell phones were only for techies, and now everyone seemingly has at least one of each. Times are changing, and large financial institutions aren’t oblivious to that fact. As a result, they are beginning to provide FinTech solutions to their clients so that their clients don’t go elsewhere. This will continue into 2018 and beyond.

The List of Benefits for Blockchain is Expanding
Just like all types of technology, financial technology is always growing and expanding. There were early pioneers that showed promise and have become household names (such as Mint and Bankrate), but now there is a next generation of technologies that can go far beyond what the pioneers could. This is partly thanks to blockchain technology, as it allows for faster, more efficient, and more secure transactions than traditional methods.

Some of the Biggest Companies Are Becoming Involved
If some of the biggest companies on the planet are getting into FinTech, that is usually a pretty good indicator that it is on the rise. Companies such as Apple, Google, Facebook, and Amazon have all developed apps or platforms that make financial services easier, cheaper, faster, and more streamlined in nearly every way.

Customer Service and Ease-of-Use Are Big Deals
How easy or straightforward a platform or system is to use will correlate directly with how many people use it and the experiences they have. If a platform is great but has a ton of friction when it comes to customer service, it will not be successful.
Because of this, many FinTech companies and start-ups need to ensure that customer experience and service are considered very early. As the FinTech industry matures, there will be more and more competition, so your user experience and customer service will be a major key to your success. Another of the biggest changes and trends in the financial world (in regards to technology) is the emergence of Robo-Advisors. Robo-Advisors are essentially a class of financial advisors that provide advice or recommendations via complex mathematical algorithms and software. They require very minimal human intervention and are thus quicker and more efficient, in terms of both cost and time. Robo-Advisors bring services to a wider range of individuals and are a breakthrough in the financial industry. Here at Cedrus, we are especially interested in this last trend, as we recently assisted a very high profile client by building them a Robo-Advisor of their own. The Robo-Advisor was created to be an aid to customer investments and retirement planning. It caters mainly to younger investors and is exclusively available online. With the help of algorithms and a questionnaire, customers are easily and quickly able to open an account. We’re interested to hear from you on what new innovations you are most looking forward to in 2018. Let us know in the comments section below!

  • Using Active Directory with Netskope (Part 1)

It’s likely that most companies planning large-scale Cloud Access Security Broker (CASB) projects are users of Active Directory (AD). This paper will discuss how to get the most out of your Netskope implementation by using the AD integration features appropriately. In a future Part 2, we dig deeper into the implementation of some of these features and provide helpful sample code. This document is not intended to replace the existing Netskope documentation, and will not cover the implementation details for the Netskope tools, as those are already well defined. Rather, we will discuss which tools solve which use cases, and why you would choose to use them.

Active Directory Use Cases
Netskope provides a set of tools called the NS Adapters, as well as a REST API, and we will look at each of these in some detail. But first, we are going to look at three use cases that show why extracting AD data is so important.

Use Case 1: Enriching log data to support Discovery and Remediation
A prime use case for almost all CASB users is SaaS application discovery and remediation. The driver for discovery is data that comes from two sources:
- Data from an organization’s web proxy or firewall logs. This data is collected by an on-premise component, parsed, and sent to the Netskope tenant instance.
- Data that is captured in flight using an inline proxying model, using either the Forward or Reverse Proxy model.

In both cases, the data is made available in the Netskope dashboard, providing a range of analytics, as well as the ability to perform deep dives into the data using various reporting and searching tools. It can also be extracted to a .csv file for more ad hoc analysis. To use this data successfully, the Netskope customer will need a governance process, which will depend to a large degree on the structure of the company.
For example, in some companies, one small team might be authorized and mandated to analyze all data and perform the necessary activities, such as blocking access to applications, or following up with employees who consistently use non-sanctioned tools. In other companies, this might be a delegated task, where a CASB operations team sends reports out to business-area or geographic-region based teams, which deal with the remediation aspects based on their own policies. In either case, for this to be successful, the tools need to be available for the operational teams to perform a set of activities that will likely include:
- Identifying the riskiest applications (application triage processing)
- Identifying the users of the application currently under remediation
- Working with the appropriate business units to address issues
- Making policy changes to address the problem

In many cases, the raw log data from a user’s proxy or firewall will not contain sufficient information to perform these steps in an efficient way. For example, the process might need to know the user’s name, their email address, who their manager is, their location and department, etc. This data is unlikely to be in the raw log data in a complete form. There may be a unique key present that identifies the employee, but the other data would need to be looked up, either from AD or from an HR application. In some cases, the only identifying information in the logs might be the source IP address. Using tools that Netskope provides, the necessary additional data attributes can be added to the log data in the Netskope tenant, so that it is all available in one place for reporting and analysis purposes. Similarly, for inline data, Netskope may also be lacking the complete set of user attributes that are needed. Again, the AD tools can be used to enrich the data.
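Conceptually, this enrichment step amounts to joining each raw event against a directory extract. The following is a minimal Python sketch (the field names, the `user_key` identifier, and the sample record are all hypothetical, not Netskope's actual schema):

```python
# Hypothetical directory extract (from AD or an HR feed), keyed by the same
# identifier that appears in the raw proxy/firewall log events.
DIRECTORY = {
    "jdoe": {"name": "Jane Doe", "email": "jdoe@example.com",
             "manager": "bsmith", "department": "Legal", "location": "NYC"},
}

def enrich_event(event, directory):
    """Merge directory attributes into a raw log event when the key is known;
    events with an unknown key pass through unchanged."""
    attrs = directory.get(event.get("user_key"), {})
    return {**event, **attrs}
```

With this in place, a bare event like `{"user_key": "jdoe", "app": "box.com"}` comes back carrying the manager, department, and location attributes the remediation team needs.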
Use Case 2: Creating policies at a more granular level
Netskope policies can be created to be global, but in many cases there will be a need to define policies that have a smaller scope. Importing AD information into the Netskope tenant makes it possible to define policies that apply to an OU, a group, or even an individual user. Some examples of why this might be needed:
- The legal team has more severe restrictions on how it uses certain types of information in correspondence and documents
- Certain regulatory aspects differ by country, so a multinational organization may need to define regional policies
- An application that is unsanctioned for the wider population might be made available to one small team that needs it to integrate with the workflow of a business partner. A policy needs to be set up to cover the users in that single department.

Importing AD data into Netskope allows for the creation of policies at these more selective levels.

Use Case 3: Installing the Netskope Client on user devices
AD data can be used to allow administrators to push the Netskope client out to users, which can then ensure that all selected network traffic is proxied by Netskope. This relies on a complete list of users being uploaded from AD into the Netskope tenant.

Netskope AD Tools
Now that we have looked at the use cases, let’s look more closely at the tools that Netskope provides. The Netskope Adapters can be downloaded from your Netskope tenant on the Directory Tools page under Settings. There are three tools available, and we are going to discuss two of them: the Directory Importer and the AD Connector. (The third tool in the package is the DNS Connector, which is not relevant to this discussion.) As mentioned previously, Netskope also provides a REST API, and one of the key features of that API is the ability to upload additional AD attributes. We will look at each of these capabilities in turn.
Netskope Directory Importer
The Directory Importer runs as a Windows service. It needs to be able to connect to a Domain Controller in the AD, as it periodically makes calls to fetch user and group information, which it then posts to the Netskope tenant. Once the configuration is working, you will see the list of users and groups in the tenant under Settings > Active Platform. This data can be used to fulfill use cases 2 and 3. You will see, once this data is uploaded, that you can now set up policies at the user, group, and/or OU level, depending on what data you have selected to upload. You can also make use of the list of users to manually send invitations to install the Netskope client, or to integrate with tools such as SCCM or JAMF for automated installations. Note, however, that the Directory Importer by itself does not resolve use case 1. Data that is uploaded to the tenant via the Importer does not enrich log data as seen in SkopeIT or Reporting. In order to achieve that goal, one of the other tools will be necessary.

Netskope AD Connector
The Netskope AD Connector also runs as a Windows service. It retrieves user login events from a configured set of Domain Controllers and passes the IP-address-to-username mappings to any Netskope Secure Forwarders or On-Premise Log Parsers (OPLPs) that have been configured locally. In addition, custom attributes can be selected from Active Directory and passed to the other components. However, there are some limitations here. The IP address in the log data might not match the IP address that the user logged in from, depending on how the local network has been configured. The AD Connector only allows for a maximum of five extended attributes, which may not be sufficient to meet the business need for discovery and remediation processing. The AD Connector can be used to enrich data that traverses an OPLP (log data) or Secure Forwarder (inline forward proxy).
However, it does not address other proxying options, such as the Netskope Agent or the SAML Reverse Proxy. Some customers may be unwilling to have Netskope read their security logs. Also, the data in AD might not be the most accurate data available: some customers might prefer to take data from their HR system, or from some other Identity and Access Management (IAM) solution. In situations where the AD Connector is not workable, or does not provide accurate or sufficient information to meet the business need, the final option is to use the REST API to upload additional attributes to enrich the data in the tenant.

Netskope REST API
The Netskope REST API provides a variety of functions, one of which is to upload additional user attributes to the Netskope tenant that can be used to enrich the data in SkopeIT events. This feature is documented on the Netskope Support site. The REST API allows you to upload a CSV file of data. This data can be obtained from Active Directory, or from some other source that provides more accurate data, such as an HR system. The documentation provides details as to how the CSV file should be structured. One key aspect is that the first column must contain the user identity key that identifies the data in Netskope. If that data comes from multiple sources, such as web proxy logs and the Netskope Secure Forwarder, it is possible that different identifiers are being used for the different sources. In this case, it will be necessary to have duplicate rows in the CSV file, one for each possible key that can be matched against. Netskope provides a Bash script that can be used to upload the file to the tenant. This script will split the upload into 5MB chunks if the data is too large. Note that the script assumes the existence of the CSV file: it does not extract the data from its source, nor does it manage job scheduling. You will need to implement those aspects of the solution.
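To make the duplicate-row requirement concrete, here is a minimal Python sketch (the column names and helper functions are ours for illustration, not Netskope's; consult the REST API documentation for the exact column layout). It emits one row per possible identity key, and shows the kind of size-based chunking the upload script performs:

```python
import csv
import io

def build_attribute_csv(users, attribute_names):
    """Emit one CSV row per identity key. A user known by several keys
    (e.g. an email address in proxy logs and an account name from the
    Secure Forwarder) gets a duplicate row for each key, so that either
    identifier can be matched against."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["user_key"] + attribute_names)
    for user in users:
        for key in user["keys"]:
            writer.writerow([key] + [user.get(a, "") for a in attribute_names])
    return buf.getvalue()

def split_into_chunks(data, limit=5 * 1024 * 1024):
    """Split an upload into pieces no larger than the stated 5MB limit."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]
```

A user record carrying two keys produces two data rows with identical attributes, which is exactly the duplication the documentation calls for when events arrive under different identifiers.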
In part 2 of this series, we will show you how to create a PowerShell script to extract the necessary attributes from Active Directory, perform the upload to the tenant (avoiding the need to move the file to a Linux machine), and handle scheduling. To access the second part of the series, please click here.

Paul Ilechko | Senior Security Architect
Andrew Hejnas | Cloud Security Specialist & Solutions Architect

  • NYDFS 23 NYCRR 500 – Why CASB and IAM are key to NYDFS compliance

(Final post in a three-part series) Three weeks ago we started a three-part series on the adoption of the NY Department of Financial Services (NYDFS) 23 NY Codes, Rules, and Regulations (NYCRR) Part 500. We discussed key steps businesses need to consider and the challenges they’ll face on the road to compliance. To view the first two posts, click here. In today’s post, we’re going to showcase the role of Cloud Access Security Broker (CASB) and Identity and Access Management (IAM) – how they protect NPI (Non-Public Information) and support NYDFS compliance.

CASB is a key security technology for NYDFS compliance
CASB provides critical features necessary in the control strategy for cloud applications:
- Discover what cloud applications are in use, as well as where specific data is going in cloud applications, such as PII, PHI, or NPI
- Invoke actions such as alerting the user, or blocking a specific app or action (like upload or download), based upon unusual behavior identified through user behavior analytics
- Detect data compromises and anomalies and take action, while informing other security systems like Security Information and Event Management (SIEM) for event correlation and forensics
- Provide vendor risk analysis and ranking, including important items such as recent breaches and incidents, the infrastructure used to serve the application, and the vendor’s policies around data ownership and destruction
- Control access to critical cloud apps and data using the context of device, data, location, or other behavioral risk information
- Monitor authorized users to track their application use

Want to find out more? View our on-demand webinar “The Road to CASB: Compliance Challenges & Key Business Requirements” and download our Road to CASB: Key Business Requirements 2.0 Whitepaper, designed to provide you with requirements that you can use as input for your CASB initiative. Have more questions? Contact us to find out how we can help with your security and compliance needs.
Kyle Watson Partner, Information Security at Cedrus Digital

  • NYDFS 23 NYCRR 500 – How is compliance measured?

(Second in a Three-Part Series of Blog Posts) Last week we started a three-part series on the implications of NY Department of Financial Services (NYDFS) 23 NY Codes, Rules, and Regulations (NYCRR) Part 500, a new set of regulations from the NYDFS that places new cybersecurity requirements on all covered financial institutions (a.k.a. covered entities). To check out that post, click here. In today’s post we will discuss the compliance measurement process and the Risk Assessment. A recent survey by the Ponemon Institute reports that 60 percent of respondents (who primarily work in their organizations’ IT, IT security, and compliance functions) believe this regulation will be more difficult to implement than GLBA, HIPAA, PCI DSS, and SOX. What is unique about NYDFS NYCRR Part 500 is that it obligates entities to comply with more specific and enforceable rules than they currently face. It differs from existing guidance, frameworks, and regulations in several important ways:
- Broad definition of protected information
- Broad oversight of third parties
- Timely destruction of NPI (nonpublic information)
- Prompt notification of cybersecurity events (72 hours)
- Maintaining unaltered audit trails and transaction records
- Annual certification (first submission due on February 15, 2018)

As an NYDFS covered entity, an organization must certify that it has implemented the controls outlined in the requirements of NYCRR Part 500. In order to certify, the Board of Directors or Senior Officers must have evidence that appropriate governance, processes, and controls are in place. This evidence is provided through the Risk Assessment.
There are 9 major components of the NYDFS regulation that should drive an entity’s Risk Assessment:
- Program
- Policies
- Training
- Third-party Risk Management
- Vulnerability & Penetration Testing
- Logging and Monitoring
- Access Security
- Multi-factor Authentication
- Encryption

It is important to note that the Risk Assessment must be conducted periodically, updated as necessary, and conducted in accordance with written policies and procedures, so that it is a defined and auditable process. Finally, it must be well documented. Meeting compliance will be a challenge for some, even though financial services companies have expected the new cybersecurity regulation for some time. Some of the challenges that we foresee in achieving NYDFS compliance are:
- Keeping senior management and key stakeholders involved in the planning and reporting process
- Running regular risk assessments, noting deficiencies from each assessment, and adjusting as necessary
- Validating that you are covered within your technology line-up. Are key technologies such as encryption and multi-factor authentication in place?
- Reporting within 72 hours. As you review your incident process, assess whether you can respond to the reporting requirements for cybersecurity events.

In addition to protecting customer data and fortifying the information systems of financial entities, another major attribute of NYDFS 23 NYCRR Part 500 is that it widens the net of regulated data protection. NYDFS is driving organizations to properly secure sensitive Non-Public Information, known as NPI. Even though NPI classification is not new (GLBA was one of the first regulations to introduce personal data security requirements for NPI), the NYDFS regulation has a more prescriptive approach than others – it requires entities to implement policies, procedures, and technologies to comply. NPI acts as an umbrella over PII (Personally Identifiable Information) and PHI (Protected Health Information).
All three data types have their nuances though, so even if you secure your PII and PHI, it doesn’t mean that your NPI is 100% secure and that you’re in compliance.  Take some time to evaluate NPI in your organization – see section 500.01.g for the NYDFS definition of NPI. So, what steps can you take today that will assist your organization in being compliant with NYDFS through the proper protection of NPI? Join us next week for the third and final post, where we discuss why CASB and IAM are two key tech components that can help in your overall compliance strategy for Part 500, and ultimately improve your ability to protect sensitive data and avoid a breach. Want to find out more? View our on-demand webinar “The Road to CASB: Compliance Challenges & Key Business Requirements” and download our Road to CASB: Key Business Requirements 2.0 Whitepaper, designed to provide you with requirements that you can use as input consideration for your CASB initiative. Have more questions? Contact us to find out how we can help with your security and compliance needs. In the third part of the series, we’re going to showcase the role of Cloud Access Security Broker (CASB) and Identity and Access Management (IAM) – how they protect NPI (Non-Public Information) and support NYDFS compliance. Kyle Watson Partner, Information Security at Cedrus Digital

  • NYDFS 23 NYCRR 500 – 5 Key Things Financial Companies Must Do

(First in a Three-Part Series of Blog Posts) Wednesday, September 27th marked the end of the initial 30-day period for filing notices of exemption for the New York Department of Financial Services’ (NYDFS) Cybersecurity Regulation, 23 NY Codes, Rules, and Regulations (NYCRR) Part 500. For those of you in organizations subject to NYDFS oversight, you are probably aware of 23 NYCRR 500, a new set of cybersecurity requirements that went into effect this past March for financial services companies operating in New York. Its purpose is to address the heightened risk of cyberattacks by nation-states, terrorist organizations, and independent criminal actors.

So who does NYDFS NYCRR Part 500 apply to? If your company operates in New York, the first question you should ask is: does my company meet the definition of a Covered Entity? According to the DFS website, the following entities are subject to compliance:
- Licensed lenders
- State-chartered banks
- Trust companies
- Service contract providers
- Private bankers
- Mortgage companies
- Insurance companies doing business in New York
- Non-U.S. banks licensed to operate in New York

As the year comes to an end, it is extremely important that your organization is ready to comply and file the annual DFS Certification of Compliance, which is due on February 15, 2018. In Financial Services, you should already have a set of policies, procedures, standards, and guidelines based on a common security framework (ISO, COBIT, etc.) that allow you to perform risk assessments and comply with regulatory mandates. Policies drive the necessary processes and procedures that govern your day-to-day operations, enabling your business to be secure and compliant. You should be reinforcing this with all types of people that have access to your systems and data, through awareness training during onboarding and on a periodic basis. There has been an increasing focus on compliance at the data level of protection.
Under NYDFS, this data is classified as Non-Public Information. It is necessary for organizations to have data protection strategies in place to protect employees, partners, and customers. The increase in threats and breaches has prompted legislative bodies to issue regulations to ensure that companies are behaving in a way that mitigates risk. Many new regulations have come into play in recent years. Prior to NYDFS 23 NYCRR 500, there was the EU General Data Protection Regulation (EU-GDPR) in 2016, and Service Org Control (SOC) in 2011 (formerly SSAE16 in 2010 and SAS70 in 1992). A strong risk-based approach to data protection means that your company should have a short distance to travel to reach compliance, but each new regulatory mandate introduces changes that must be considered in data protection, visibility, and reporting to the executive level. NYDFS compliance started in March 2017, with a transitional period that ended in August 2017 and the deadline for filing an extension ending just last month, in September 2017. The timeline gets more specific as the new year rolls out, with the first annual certification due on February 15, 2018. Following the first 2018 deadline is a timeline for implementation of the specific controls required by the regulatory mandate.
The NYDFS 23 NYCRR 500 Timeline
There are 5 key things that you need to do immediately, if you have not done so already:
1. Appoint a Chief Information Security Officer (CISO) with specific responsibilities
2. Ensure that senior management files an annual certification confirming compliance with the NYCRR Part 500 regulations
3. Conduct regular assessments, including penetration testing, vulnerability assessments, and risk assessments
4. Deploy key technologies including encryption, multi-factor authentication, and others
5. Ensure your processes allow you to report to NYDFS within 72 hours any cybersecurity event “that has a reasonable likelihood of materially affecting the normal operation of the entity or that affects Nonpublic Information.”

What makes this new set of regulations unique is that it requires companies to comply with more specific, enforceable rules than they currently use. It also differs from existing guidance, frameworks, and regulations in that it has a broad definition of protected information, increased oversight of third parties, and calls for timely destruction of NPI (Non-Public Information) and prompt notification of a cybersecurity event (72 hours). Entities are also mandated to maintain unaltered audit trails and transaction records and to submit annual certification. In our next post we will discuss the 9 major components of the NYDFS regulation that should drive your Risk Assessment. If you would like to be kept up to date on cloud security issues, please click here to subscribe to our Cloud Security eNews. In the second part of the series, we discuss the compliance measurement process and the risk assessment. Kyle Watson Partner, Information Security at Cedrus Digital

  • Road to CASB: Key Business Requirements 2.0

The Cloud Access Security Broker (CASB) market is mainstream. Venture-backed startups are being acquired, and Big Tech firms are positioning themselves to better align with this new tech disruption. Enterprises are moving from talk to action in implementing solutions that better protect their data on the new frontier of cloud. Since CASB solutions are new, many organizations are seeking guidance on how to properly evaluate tools and vendors in light of their compliance and risk mitigation requirements. The goal of this paper is to provide a kickstart through a working set of requirements for organizations to leverage, and modify as needed, in their search for a CASB solution. This set of requirements provides some structure on how CASBs fit into the overall Information Security strategy. The paper is designed to provide key requirements that organizations can use as input for their CASB initiative. Each requirement lists specific features that are important in most organizations, but specific risk mitigation priorities must be analyzed and decided within each organization. For instance, Cedrus has provided examples of integrations such as Security Information and Event Management (SIEM), but each organization’s needs may be more specific to a particular SIEM. Contact us for access! #cloudapplications #cloudsecurity #informationsecurity

  • API Czar, CodeSoju at the official Angular and Node Meetups in New York

A month ago, I had the pleasure of presenting API Czar & CodeSoju at the official AngularJS and Node.js Meetups, which were founded in 2010 and 2012 in New York City. API Czar is a rapid API development tool that enables teams to generate and deploy best-practice-based APIs, while CodeSoju is an open source initiative that provides a set of standards, best practices, and tools to help developers during all phases of the development lifecycle. In these two presentations, I talk about how API Czar & CodeSoju work, as well as the importance of the growing community of developers that supports them. Both tools sparked great discussion in the two communities, mainly because they solve everyday challenges those communities face. API Czar & CodeSoju derived from a key question we ask ourselves after completing any project: how can we maximize our team’s efficiency on large-scale projects? For more information, please visit and

  • Cloud App Encryption and CASB

Many organizations are implementing Cloud Access Security Broker (CASB) technology to protect critical corporate data stored within cloud apps. Amongst many other preventative and detective controls, a key feature of CASBs is the ability to encrypt data stored within cloud apps. At the highest level, the concept is quite simple: data flowing out of the organization is encrypted as it is stored in the cloud. However, in practice there are nuances in the configuration options that may affect how you implement encryption in the cloud. This article outlines important architectural decisions to be made prior to the implementation of encryption solutions through CASB.

Gateway Delivered, Bring Your Own Key (BYOK), or Vendor Encryption
There are three generic methods of cloud-based encryption.

I. Gateway delivered encryption – In this model, the CASB may integrate with your organization’s existing key management solution through the Key Management Interoperability Protocol (KMIP), or provide a cloud-based key management solution. In either case, the keys used to encrypt your data never leave your CASB.
- Data is encrypted before it leaves your environment and is stored at the vendor
- You control the keys
- The vendor retains no capability to access your data

II. BYOK encryption – In this model, the keys are generated and managed by your organization, and then supplied to the vendor. BYOK allows you to manage the lifecycle of the keys, which are then shared with the vendor. This includes revoking and rotating keys. The keys are then provided to and utilized by the vendor to decrypt requested data for use by authorized users. A CASB can be involved as a broker of the keys to simplify, centralize, and streamline the process of key management, by allowing you to perform this administration directly in the CASB User Interface (UI). This also may be done using KMIP with your existing key management solution.
Alternatively, without a CASB you may still enjoy the benefits of encryption with your own keys, but administration would be manual on an app-by-app basis.
  • Data is encrypted at the vendor
  • You can control the keys
  • The vendor retains the capability to access your data

III. Vendor provided encryption – In this model, the vendor provides the keys and key management. Administration may be provided through user interfaces supplied by the vendor. The CASB is not involved.
  • Data is encrypted at the vendor
  • The vendor controls the keys
  • The vendor retains the capability to access your data

Important Considerations

There is no single “best” way to manage encryption for cloud apps. Making the best decision for your company begins with your motivation: is your primary concern compliance, mitigating the risk of vendor compromise, protecting data from being disclosed in blind subpoenas, or all three?

Compliance – Encryption for compliance can be met easily by any of the three approaches, and is simplest with vendor provided encryption.

Mitigating risk of vendor compromise – Using encryption to mitigate the risk of vendor compromise implies the need to manage your own key, since your data will not be accessible without the key. Gateway delivered encryption provides the highest level of risk mitigation against vendor compromise, as your keys never leave your environment. Cyber-attackers stealing your data will not be able to decrypt it without using your key or breaking your encryption. Risk may also be mitigated through BYOK, but agreements must be secured from the vendor to communicate breaches in a timely fashion, and you must then take appropriate revocation actions in your key management process.

Protecting data from being disclosed in subpoenas / blind subpoenas – Using encryption to protect data from being disclosed in subpoenas also implies the need to manage your own key. 
Gateway delivered encryption provides the highest level of risk mitigation against blind subpoena through completely technical means, as third parties retrieving your data will not be able to decrypt it without your key. Risk may also be mitigated through BYOK, but agreements must be secured from the vendor to communicate third-party requests for your data in a timely fashion, and you must then take appropriate revocation actions in your key management process.

Unstructured and Structured Data

To explain these approaches further, we must distinguish two very different types of data prevalent in the cloud: unstructured and structured data. Unstructured data refers to data generated and stored as unique files and typically served through end-user apps, for example Microsoft Word documents. Structured data refers to data that conforms to a data model and is typically served through relational databases and User Interfaces (UI), for example the Salesforce UI.

Structured Data

Gateway delivered encryption – Since the CASB sits between your end user and the application, structured data can present a usability challenge. Whenever the application vendor changes field structures, the encryption must be addressed in order to maintain usability. From a security perspective, the app must decrypt and reveal some information in order to allow search, sort, and type-ahead fields to work properly in a cloud app UI. This is known as “Format Preserving”, “Order Preserving”, and “Order Revealing” encryption, which can lower the overall standard of protection. A growing body of research is challenging this method and exposing weaknesses that may lead to compromise. For example, if you were to type “JO” in a field and it revealed all of the persons with names beginning with JO, this data has to be retrieved decrypted to support the UI. 
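The type-ahead example just described can be made concrete with a toy sketch (the prefix-token scheme below is invented for illustration and is not a real format- or order-revealing construction). A deterministic index lets the app match the prefix “JO” without decrypting full values, but anyone observing the index learns which records share prefixes:

```python
import hashlib

def prefix_tokens(value, secret):
    """Stable token per prefix, so the server can match type-ahead queries."""
    return [hashlib.sha256(secret + value[:i].encode()).hexdigest()[:8]
            for i in range(1, len(value) + 1)]

secret = b"tenant-key"
index = {name: prefix_tokens(name, secret) for name in ["JOHN", "JOAN", "JANE"]}

query = prefix_tokens("JO", secret)[-1]      # the token for the prefix "JO"
matches = [n for n, toks in index.items() if query in toks]
# matching works without full decryption, but the shared-prefix structure leaks
```

This is exactly the trade-off the research mentioned above targets: the structure that makes search, sort, and type-ahead usable is itself information an attacker can exploit.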
BYOK encryption – Since you supply the keys to the vendor, encryption/decryption occurs within the vendor application architecture. This reduces the risk of usability problems when using encryption, because the decryption happens under vendor control. From a security perspective, BYOK does not suffer from the same risk of compromise in “reveal” as exists in gateway delivered encryption.

Vendor provided encryption – Since the vendor owns the keys, encryption/decryption occurs within the vendor application architecture. This likewise reduces the risk of usability problems, and vendor provided encryption does not suffer from the same risk of compromise in “reveal” as exists in gateway delivered encryption.

Unstructured Data

Gateway delivered encryption – The risk of usability problems is low for unstructured data in cloud storage. However, an important consideration is key rotation. Data encrypted under one set of keys can only be opened with those keys, so retired keys may need to remain available in archive for reads.

BYOK encryption – Since the keys are supplied to the vendor, encryption/decryption occurs within the vendor application architecture, as do key rotation and management.

Vendor provided encryption – Since the vendor owns the keys, encryption/decryption occurs within the vendor application architecture, and key management processes will depend on the vendor.

Industry Direction

Most major cloud vendors are moving toward support of a BYOK model, including Salesforce, ServiceNow, Box, Amazon Web Services (AWS), and Microsoft Azure, to name a few. With more and more vendors offering this capability, at Cedrus we believe this is the direction of cloud encryption. 
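The key-rotation point above for unstructured data can be sketched in miniature (a toy Python model, not real cryptography: the XOR “cipher” is a stand-in for something like AES-GCM). Each stored object records the key version that encrypted it, and retired versions stay on the ring so old data remains readable:

```python
import secrets

def xor(data, key):                      # toy stand-in for a real cipher
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KeyRing:
    """Retired key versions remain available so old ciphertext stays readable."""
    def __init__(self):
        self.versions, self.current = {}, 0
    def rotate(self):
        self.current += 1
        self.versions[self.current] = secrets.token_bytes(32)

ring = KeyRing()
ring.rotate()                            # version 1 in service
old_doc = (ring.current, xor(b"q3 forecast", ring.versions[ring.current]))

ring.rotate()                            # rotation: new writes use version 2...
ver, blob = old_doc
plain = xor(blob, ring.versions[ver])    # ...but version 1 still decrypts old data
```

Dropping a retired version from the ring is effectively crypto-shredding: any archive encrypted under it becomes unreadable, which is why rotation policy and archive reads have to be planned together.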
Opinion

Gateway delivered encryption – This is the highest level of security available for cloud app encryption, but it may affect the business through usability issues, especially when applied to structured data. High-risk apps and data are safest in this configuration, which also requires the most care and feeding.

BYOK encryption – This implementation can provide a very high level of security without the impact that comes with gateway encryption. With a CASB brokering keys to centralize their management, this solution provides an excellent balance between protection and usability for high-risk apps and data.

Vendor provided encryption – This implementation provides a much higher level of security than not implementing encryption at all. It may be best suited for apps and data of lower criticality, or for meeting compliance requirements only.

Recommendations

As with all security decisions, risk and compliance must be the yardstick. Since we do not know your industry, application, or business risk, this is a generic recommendation. Where possible, always leverage your own keys over vendor provided keys; remember, a breach into a lower-risk app may provide clues to breach other apps. When offered as an option, the best trade-off between security and usability is BYOK, and it is very important to gain agreement from vendors for proactive communication. Where BYOK is not offered, the risks must be weighed carefully between vendor provided and gateway delivered encryption, especially for structured data. When considering a move to gateway encryption, risk analysis of the app and data is critical; the risk of compromise should be a clear and present danger, because a decision to move to gateway encryption for structured data means a commitment to management and maintenance at a much higher level than BYOK or vendor provided encryption. 
This is not a recommendation against taking this course, but advice to consider the path carefully and plan the resources necessary to maintain this type of implementation. In a recent exchange, a customer articulated the challenge: “We use CASB to provide field level encryption for our Salesforce instance. There are many issues requiring a lot of support and we have plans to move away from it and leverage encryption that is part of the Salesforce platform.” For more educational material on CASB, please see my other posts, which you can find on my profile page: Thank you. Kyle Watson Partner, Information Security at Cedrus Digital

  • Adding value to native cloud application security with CASB

    Many companies are starting to look at Cloud Access Security Broker (CASB) technology as an extra layer of protection for critical corporate data as more and more business processes move to the cloud. Most will start with a discovery phase, which typically involves uploading internet egress logs from firewalls or web proxies to the CASB for examination. This provides a detailed report of all cloud application access, usually sorted by a risk assessment specific to the CASB vendor doing the evaluation (all of the major CASB vendors have strong research teams who do the cloud service risk evaluation for you, so that you don’t have to). This is a great starting point in the CASB world, as it provides instant value: it enables the company to start thinking about the policy needed to protect itself in the cloud, and to drive conversations with the business departments using the cloud services, to understand why they are using them and whether they really need them to get their jobs done. This can drive a lot of useful considerations, such as: Is this service safe, or is it putting my business/data at risk? If it is creating risk, what should I do about it? Can I safely block it, or will that cause an issue with my business users? If my business users need this functionality, are there better options out there that achieve the same goals without the risk? And so on; you get the idea. This discovery, assessment, and policy definition phase can take some time, possibly weeks or even months, before you are ready to take the next step into a more active CASB implementation. 
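The discovery report described above can be approximated in miniature: aggregate egress log lines per cloud app and sort by a vendor-style risk score. The log format, app names, and scores below are invented purely for illustration:

```python
from collections import Counter

# Toy egress log entries: (user, destination app).
# A real CASB parses raw firewall/web-proxy exports instead.
LOG = [("alice", "filesharer.example"), ("bob", "filesharer.example"),
       ("carol", "crm.example"), ("alice", "crm.example"),
       ("dave", "filesharer.example")]

# Hypothetical per-app risk scores of the kind a CASB research team maintains.
RISK = {"filesharer.example": 8.7, "crm.example": 2.1}

usage = Counter(app for _, app in LOG)                       # accesses per app
report = sorted(usage.items(), key=lambda kv: RISK[kv[0]], reverse=True)
# report lists each discovered app with its access count, riskiest first
```

Even this toy version surfaces the conversation starters the article mentions: the riskiest app tops the list along with how heavily the business actually uses it.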
Let’s quickly summarize the ways in which a CASB can be integrated into a more active protection scheme: CASBs provide API-level integration with many of the major SaaS, PaaS, and IaaS services, allowing for out-of-band integrations that perform functions like retroactive analysis of data stored in the cloud, or near real-time data protection capabilities that can be implemented in either a polling or a callback model. CASBs also typically provide an in-line proxy model of traffic inspection, where either all, or some subset, of your internet traffic is proxied in real time and decisions are made on whether to allow the access to proceed. This can incorporate various Data Loss Prevention (DLP) policies, can check for malware, and can perform contextual access control based on a variety of factors, such as user identity, location, device, and time of day, as well as sophisticated anomaly and threat protection using data analytics, such as unexpected data volumes, non-typical location access, and so on. For users who are leery of using a CASB inline for all traffic, particularly when that traffic already traverses a complex stack of products (firewall, web proxy, IPS, Advanced Threat Protection …), many CASB vendors also provide a “reverse proxy” model for integration with specific sanctioned applications, allowing for deeper control and analysis that integrates the CASB with the cloud service using SAML redirection at login time. So, let’s assume that you’ve been through discovery, defined some policies, decided which of your major SaaS applications you want to focus on first, and now you are ready to move forward with the next phase of implementation. Perhaps you want to protect data flowing into Salesforce, Office 365, or Google applications, as these are the most-used and most critical business applications for your organization. 
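A contextual access-control decision of the kind just described might look like this in miniature. The rules and field names are hypothetical, chosen only to illustrate the shape of an inline policy decision:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    device_managed: bool
    country: str
    activity: str                     # e.g. "view", "download"

def decide(ctx):
    """Return an inline verdict for one request, based on its context."""
    if ctx.activity == "download" and not ctx.device_managed:
        return "block"                # no bulk data to unmanaged devices
    if ctx.country not in {"US", "CA"}:
        return "step-up-auth"         # non-typical location: challenge the user
    return "allow"
```

A real inline CASB evaluates far richer context (identity provider attributes, device posture, behavioral analytics) and supports verdicts beyond block/allow, such as coaching pages or quarantine, but the decision has this same shape.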
But now, as you prepare to move forward, you start to get pushback from the owners of those applications within your organization. “We already have security built into the application,” they say. “Why make things more complicated?” That’s a fair question, so for the rest of this paper we’ll look at some of the key features that CASBs provide over and above typical platform capabilities.

Policy based encryption

Many platforms, such as Salesforce with its Salesforce Shield capability, provide the ability to encrypt data. With Shield, for example, this can be at either the file or the field level. However, Shield is configured at the organization level, and most companies that use Salesforce will probably have created multiple Salesforce Orgs. It’s likely that you want to define policy consistently across Orgs, and even across multiple applications, such as Salesforce and Office 365. A CASB gives you the capability to define a policy once and apply it many times. You have the option to use the CASB’s own encryption or, in some cases, to make use of the CASB’s API integration to interact with the platform’s own native tools (e.g., some CASBs are able to call out to Salesforce Shield to perform selective encryption as required by policy). The CASB can protect your data no matter where in an application it resides: in a document, in a record, or in a communication channel such as Chatter. (The CASB can, of course, provide these capabilities for many applications; we are just using Salesforce here as an example.)

Protection against Platform-based leakage

If you store data in a cloud service and encrypt it with the platform’s own native mechanisms, then your data is potentially at risk if that provider is hacked. 
If you use a CASB to encrypt data, using keys that you control and that are stored in your own keystores (either on premise or in the cloud – most CASBs allow you to use any KMIP-compliant key management solution), then you are protected against this type of threat. Beyond this simple example of a security breach, there are other related issues. For example, if the platform has a larger ecosystem built on it, as Salesforce does, third-party applications may have access to your data; once you encrypt data using a third-party platform, you no longer have exclusive control of how that data is accessed, and by whom. Or what if a government agency, such as the IRS, demands access to your financial data? Or perhaps, as part of a lawsuit, there is a subpoena against some data stored in one of your cloud services? If you don’t hold the keys, your provider may be forced to provide that access without you even being a party to the conversation!

Continuous Data Monitoring

A CASB can provide real-time or near real-time monitoring of data. It can use APIs to retroactively examine data stored at a cloud provider, looking for exceptions to policy, threats such as malware, or anomalies such as potential ransomware encryptions. It can act as a proxy, examining data in flight and taking policy-based actions at a granular level. More than a black-or-white block/allow action, the CASB can coach the end user to move to an approved platform, or temporarily quarantine a suspect file until it can be examined. This goes far beyond the simplistic authorization and encryption capabilities built into most SaaS platforms.

Threat and Anomaly recognition

CASBs typically provide strong capabilities around threat protection and anomaly recognition. Using advanced data science techniques against a “big data” store of knowledge, they can recognize negligent and/or malicious behavior, compromised accounts, entitlement sprawl, and the like. 
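As a toy illustration of the anomaly recognition described above (the thresholds and data are invented, and real CASB analytics are far more sophisticated), a per-user baseline can flag an unexpected spike in download volume:

```python
from statistics import mean, stdev

def is_anomalous(history_mb, today_mb, sigmas=3):
    """Flag today's volume if it sits more than `sigmas` deviations above baseline."""
    mu, sd = mean(history_mb), stdev(history_mb)
    return today_mb > mu + sigmas * sd

baseline = [10, 12, 9, 11, 10, 13, 11]    # a user's typical daily downloads (MB)
spike = is_anomalous(baseline, 500)       # sudden bulk transfer: worth a look
normal = is_anomalous(baseline, 12)       # within the user's normal range
```

Production systems build such baselines per user, per app, and per activity type, and combine many weak signals (volume, location, time of day) rather than relying on a single threshold.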
The exact same set of analytics and policies can be applied across a range of service providers, rather than forcing you to attempt it on a piecemeal basis. For example, if you are a Salesforce user, this can be applied to the entire AppExchange ecosystem; the CASB may also be able to detect redundant AppExchange components and warn you about ones that are particularly risky.

Cross-cloud activity monitoring

Because a CASB can be used to protect multiple applications, it can provide a detailed audit trail of user and administrative actions that traverse multiple clouds, which can be extremely useful in incident evaluation and forensic investigations. The CASB acts as a single point of activity collection, which can then be used as a channel into your SIEM. Rather than attempting to collect and upload logs from a plethora of disparate sources, you have a single, centralized, and detailed summary of activity easily available.

So to summarize: while many of the major cloud service providers have added interesting and useful security features to their applications, a CASB can add significant additional benefit by streamlining, enhancing, and consolidating your security posture across a wide range of applications.

For more educational material on CASB, please see the series of posts by Kyle Watson, which you can find on his profile page: Paul Ilechko | Senior Security Architect
