
  • Cedrus Achieves AWS Well-Architected Partner Status

    [January 12, 2021] – Cedrus, a highly sought-after digital transformation solutions provider and AWS Consulting Partner, announced today that it has achieved AWS Well-Architected Partner status, which recognizes that Cedrus has the expertise to deliver AWS Well-Architected reviews for existing application workloads or new applications based on AWS Well-Architected Framework best practices.

    “Cedrus is proud to be an AWS Well-Architected Partner, building upon our history of helping AWS customers innovate and modernize in a secure and scalable manner,” said Mike Chadwick, SVP of Business Development and Sales at Cedrus. “The AWS Well-Architected program ensures that our customers’ AWS environments follow best practices and deliver agility, reliability, security, and rapid innovation.”

    AWS enables scalable, flexible, and cost-effective solutions for organizations ranging from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Well-Architected Partner Program to help customers identify APN Consulting Partners with deep expertise in optimizing workloads and mitigating risk. Leveraging the principles of the AWS Well-Architected Framework, the Cedrus team has been helping customers gain visibility into their AWS environments and develop pragmatic, prioritized action plans. Cedrus boasts deep expertise in helping enterprises modernize complex workloads. The AWS Well-Architected Framework has proven to be a critical pillar in supporting growing enterprise adoption of governance and policy as code to drive ongoing security posture and automated compliance with corporate standards.

    AWS customers can contact Cedrus for a walk-through of a Well-Architected Review and how Cedrus can help build a secure AWS plan and roadmap. Email: Matt.Putney@cedrus.digital

    About Cedrus - Cedrus designs, develops, and implements modern cloud applications that drive digital transformation at global brands. We are a trusted advisor for design thinking, innovation and modernization founded on expertise in cloud security, cloud native application development, cognitive business automation and systems integration. www.cedrusco.com

  • Cedrus launches new service on Professional Services in AWS Marketplace

    [December 3, 2020] – Cedrus, a highly sought-after digital transformation solutions provider and AWS Consulting Partner, announced today that it is participating in the launch of Professional Services in AWS Marketplace. Amazon Web Services (AWS) customers can now find and purchase cloud development, cloud security, advisory, and process automation services from Cedrus in AWS Marketplace, a curated digital catalog of software, data, and services that makes it easy to find, test, buy, and deploy software and data products that run on AWS. As a participant in the launch, Cedrus is one of the first AWS Consulting Partners to quote and contract services in AWS Marketplace to help customers design, implement, optimize, secure, and manage their software on AWS.

    With professional services from Cedrus available in AWS Marketplace, customers have a simplified way to purchase and be billed for both software and services in a centralized place. Customers can further streamline their purchase of professional services and software with standard contract terms to simplify and accelerate procurement cycles. Cedrus’ AWS Marketplace seller profile: https://aws.amazon.com/marketplace/seller-profile?id=64fb680f-8ad7-4e16-8415-5ad32284f2ee

    “Cedrus is proud to support professional services in AWS Marketplace,” said Mike Chadwick, SVP of Business Development and Sales. “Our team is dedicated to helping companies find the complete cloud solutions they need to innovate and migrate to the cloud. Now, our customers can access cloud software solutions and our associated services to help them implement and manage their workloads on AWS from one centralized location.”

    Available at the launch of Professional Services in AWS Marketplace is the Cedrus Cloud Architecture and Security Review, which provides in-depth investigation, security risk analysis, and review of AWS architectures, workloads, and governance. Using a Cedrus-developed methodology and toolset that is more comprehensive than a standard Well-Architected Framework review, expert cloud security engineers correlate and cross-validate hundreds of variables with events, policies, practices, and system configurations to determine your at-risk status. This unique approach identifies liabilities, risks, and opportunities to improve your security posture that other assessments miss. AWS customers can contact Cedrus for a walk-through of an actual Cedrus Cloud Architecture and Security Review and see which gaps their internal assessments may be missing. Email: Matt.Putney@cedrus.digital

    About Cedrus - Cedrus designs, develops, and implements modern cloud applications that drive digital transformation at global brands. We are a trusted advisor for design thinking, innovation and modernization founded on expertise in cloud security, cloud native application development, cognitive business automation and systems integration. www.cedrusco.com

  • Cleaner Microservice Orchestration With Zeebe+Lambda

    Brian McCann, Software Engineer

    In this post I’ll talk about Zeebe. Zeebe is a source-available workflow engine that helps you define, orchestrate, and monitor business processes across microservices. I’ll show you how to quickly get a full application that leverages Zeebe and Lambda running on AWS. I’m piggybacking off a post by the company’s co-founder in which he showed how you can use Zeebe with Lambda. You may want to start with that one first for a general overview of Zeebe and how it fits in with serverless; then, if you’re interested in quickly getting all the components running on AWS, this post will show you how.

    If you embrace AWS Lambda or are considering adopting it, you may see Lambda as an alternative to Kubernetes and prefer the simplicity, reliability, and out-of-the-box scaling that the serverless way offers. I was motivated to write this for that type of developer. For Zeebe to work we need something running all the time, so we can’t be fully “serverless”. My concession is to use ECS to bridge the gap. ECS is a simpler and cheaper alternative to Kubernetes when you’re getting started with containers. If you are a serverless-first developer, you or your organization might not want to take on the overhead of running and learning k8s if almost all your workloads work fine on Lambda anyway. Zeebe is a super powerful tool that allows you to compose your Lambda functions to build arbitrarily complex stateful applications. Today I’ll show how to get up and running quickly with Zeebe and Lambda using ECS. This is what we’ll make: [diagram]

    Use case

    In case you don’t know about Zeebe, I’ll try to describe what it does by describing a use case. A little while ago I was working on a backend that would allow customers to put a down payment on a vehicle and then go pick it up at a dealership. The whole process wasn’t too complicated but did involve 5–6 discrete steps involving external API calls. Each of those steps had a number of potential failure paths, and based on the outcome of each step you would want to take a different action. On top of that, there are different types of errors. There are errors due to conditions in the physical world, like a customer putting their address in wrong, and there are technical errors like network outages or a bug in the code. You can imagine how the number of scenarios grows exponentially each time you add a step. General-purpose programming languages on their own are great at a lot of things; orchestrating a complicated sequence of steps is not one of them.

    Using event-driven architecture can help us manage emerging complexity by separating business logic into individual microservices whose isolated behavior is simpler to reason about. What we lose is end-to-end visibility of the business process, so debugging is a challenge and refactors are scary. This is where a tool like Zeebe, or its predecessor Camunda, can help. These tools let us represent our processes as diagrams that are executable. Here is an example: [example BPMN diagram]

    One of the main advantages is that our business logic is now organized into manageable and isolated components. In addition, and in my opinion most importantly, this strategy separates the concern of orchestration from everything else. This allows us to write functional-style microservice components that are easy to read and refactor. All of our orchestration is managed by a dependable and tested framework, and the orchestration logic is cleanly separated as opposed to peppered throughout our microservices (as tends to happen).
    The boxes with gears are “service tasks”, which means they represent pieces of code external to Zeebe that execute. The process engine (Zeebe) is responsible for executing the instructions and will invoke the code we’ve written where specified. In today’s example the service tasks (the little boxes with gears) will be implemented as Lambda functions. There are a lot of other powerful features exposed by BPMN (Business Process Model and Notation); the ones pictured above are just a few.

    We need a way for Zeebe to trigger the Lambda functions, because Lambda functions by design are dormant until you trigger them. We will use the code provided by the zeebe-lambda-worker to create a link between the orchestrator (Zeebe) and the business logic (Lambda). The worker will listen for events on the Zeebe broker, pick up work tasks as they come in, and forward the payload to Lambda. The original post covers how all this works. Here is what I’m adding:

    • How to spin up an ECS cluster and run the zeebe-lambda-worker on it
    • How to use an IAM role for authorization instead of AWS credentials

    Here are the steps involved in this walkthrough:

    1. Get set up with Camunda Cloud: sign up for a free Camunda Cloud trial account and launch a Zeebe instance.
    2. Use the Serverless Framework to deploy several Lambda functions and an IAM role.
    3. Get set up on ECS: create an ECS cluster using a setup shell script and deploy the zeebe-lambda-worker to it.
    4. Test that everything works end to end.

    Sign up for Camunda Cloud

    Camunda and Zeebe are the same company. Zeebe is their newer product and Camunda Cloud is their managed cloud offering, which happens to run on the newer Zeebe engine. It’s confusing, I know; it probably has something to do with marketing, as Camunda is an established and fairly well-known brand. Getting set up with a development Zeebe instance is fairly straightforward. This post from the Zeebe blog walks you through the steps. The whole thing is a good read, but you can skip everything after the “create a client” part if you just want to get up and running. We’ll need the broker address information and the client information later. Once we’re set up with a free Zeebe instance we can deploy the Serverless Framework project.

    Deploy the serverless project

    Prerequisite: you need an AWS account and an IAM user set up with admin privileges. This is a great guide if you need help.

    • Clone the repo: https://github.com/bmccann36/trip-booking-saga-serverless
    • cd into this directory: trip-booking-saga-serverless/functions/aws
    • In the serverless.yml file, update the region property if you’d like to deploy to a region other than us-east-1.

    I have made barely any changes to this from the original forked repository. I removed the HTTP event triggers from the functions since we won’t need them for this; this will make the project deploy and tear down faster since there are no API Gateway resources. I also defined an IAM role that will give our zeebe-lambda-worker (which we have yet to deploy) permission to invoke the Lambda functions. The policy portion of the role we’ll give to the ECS lambda-worker: [policy snippet]

    Run the command sls deploy -v to deploy the Lambda functions (the -v gives you verbose output).

    Connecting Lambda + Zeebe with ECS

    The last step to get our demo application working end to end is to deploy the zeebe-lambda-worker. The source code can be found here. I also forked this from Zeebe and made a small change, which was to add support for IAM role authorization.
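    To make the hand-off concrete: each service task maps to a Lambda function that simply receives the workflow variables the zeebe-lambda-worker forwards and returns whatever variables it wants merged back into the process instance. The functions in the forked repo may be written differently (and in another language); the sketch below is only an illustration of that shape, with hypothetical variable names.

    ```python
    # Illustrative sketch only -- not the repo's actual code.
    # A Zeebe service task implemented on Lambda receives the variables the
    # zeebe-lambda-worker forwards and returns new/updated workflow variables.
    import json

    def handler(event, context):
        # The worker forwards the job payload; the exact shape is an assumption here.
        variables = event if isinstance(event, dict) else json.loads(event)

        # Business logic for one service task, e.g. "reserve-car" (hypothetical name).
        reservation_id = f"car-{variables.get('tripId', 'unknown')}"

        # Whatever we return becomes workflow variables available to later tasks.
        return {"carReservationId": reservation_id, "carReserved": True}
    ```

    The important point is that the handler contains plain business logic and knows nothing about BPMN or the broker, which is exactly the separation of concerns described earlier.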
    Since our ECS task will assume a role that gives it permission to invoke the Lambda functions it needs to, we do not need to supply AWS credentials to the worker code. I added an if/else statement so that a role will be used if no AWS credentials are supplied. This works because the AWS SDK sources credentials hierarchically (see the short illustration at the end of this section). The role referred to here is the one we created earlier when we deployed the Serverless Framework project. If you prefer to use an accessKey/secretKey pair, make sure those keys are not the same keys you use as an admin user to work with AWS. You should create a role or user with credentials that allow the least permission possible; in this case we only need permission to invoke a few specific Lambda functions.

    I have already packaged the zeebe-worker as a Docker image and pushed it to a public repo. The ECS setup, which we will get to in a moment, will pull this image. If you want, you can modify the worker source code and/or build the image yourself and publish it to your own repository. You’ll need to package it as a .jar first, as it is Java code.

    Prerequisite: ecs-cli installed on your machine. You can get it as a binary from AWS or via Homebrew. If you run into issues at any point, look at the AWS walkthrough for reference. I mostly followed the steps outlined there with a few minor modifications. I have chosen to use a launch type of EC2.

    Fill in some configuration

    To make it easy to configure your ECS cluster I’ve included some configuration template files, all of which have the word “SAMPLE” in the file name. To use them, cd into zeebe-event-adapter/ in the trip-booking-saga repo. Then copy all the files with the word SAMPLE in front and save them with “SAMPLE” removed from the file name. This will cause them to be ignored by git; I’ve configured the .gitignore to ignore these files so that you or I don’t accidentally commit sensitive credentials to source control. Next, populate the values in each file as outlined below.

    (file) aws.env
    AWS_REGION=

    (file) Camunda.env
    ZEEBE_CLIENT_CLOUD_CLUSTERID=
    ZEEBE_CLIENT_CLOUD_CLIENTID=
    ZEEBE_CLIENT_CLOUD_CLIENTSECRET=

    (file) ecs-params.yml
    ...
    # modify this line
    task_role_arn: arn:aws:iam:::role/zeebe-lambda-worker-role
    ...

    The values we supply in Camunda.env will be used by the zeebe-lambda-worker to connect to the Camunda Cloud Zeebe instance we created in the first step. Once we’ve filled in the values in each of these files we’re ready to deploy. I’ve combined all the steps into one bash script so everything can be deployed with one command. Just run the setupEcs.sh script and supply your region as the only argument, i.e. bash setupEcs.sh us-east-1. If you run into problems at any step you may want to try running the commands in the script one by one manually, or refer back to the AWS tutorial. Note: you may get this warning: “INFO[0010] (service aws) was unable to place a task because no container instance met all of its requirements.” Don’t get impatient and kill the process; this just means your infra isn’t quite ready yet, and it should resolve on its own.

    If you are used to working with Kubernetes, the ECS terms will be confusing. An ECS service is not the same as a K8s service at all and has nothing to do with networking. Instead, in ECS a service just defines that you want to keep a “task” (which is basically a container with some config attached) running. It’s the same idea as a service you’d set up on a Linux server.
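    As mentioned above, the worker itself is Java, so the snippet below is not its actual code. Purely as an illustration of the same fallback idea in Python with boto3: use explicit keys if they are supplied, otherwise let the SDK’s default provider chain resolve the ECS task role.

    ```python
    # Illustration of the credential-fallback pattern (the real worker is Java).
    import os
    import json
    import boto3

    def lambda_client():
        access_key = os.getenv("AWS_ACCESS_KEY_ID")
        secret_key = os.getenv("AWS_SECRET_ACCESS_KEY")
        if access_key and secret_key:
            # Explicit credentials were supplied -- use them.
            return boto3.client("lambda",
                                aws_access_key_id=access_key,
                                aws_secret_access_key=secret_key)
        # No keys supplied: the SDK's default provider chain takes over and,
        # on ECS, resolves the task role attached to the service.
        return boto3.client("lambda")

    def invoke(function_name: str, variables: dict) -> dict:
        # Forward the workflow variables to a Lambda function and return its result.
        response = lambda_client().invoke(FunctionName=function_name,
                                          Payload=json.dumps(variables))
        return json.loads(response["Payload"].read())
    ```

    Either way, the worker ends up with just enough permission to invoke the specific Lambda functions the workflow needs.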
    Test that it works

    Disclaimer: the main purpose of this post is to show how to easily deploy the zeebe-lambda-worker to work with Lambda functions, so I won’t cover Zeebe usage much. If you want to learn more about Zeebe, check out Bernd Rücker’s posts or try the Zeebe quickstart.

    Probably the simplest way to deploy and start the workflow is with the Zeebe Modeler tool, which you can download here. You can also use the command-line tool zbctl. Once you have the modeler downloaded, just open the trip-booking.bpmn file (in the zeebe/aws directory) in the Zeebe Modeler. Use the upload and play buttons to upload the workflow to the cluster and start an instance of the process. If the process instance worked successfully end to end, you will see the completed instance in the Operate console.

    If we navigate to ECS, select the service we’ve deployed, and click on logs, we can see the successful results from the Lambda functions being passed back and forth through the zeebe-lambda-worker. Also, if we navigate to the Lambda console, select one of the Lambdas involved in our workflow, then select Monitoring → View logs in CloudWatch, we can see the logs of the Lambda function itself.

    Cleaning up

    When you don’t want the ECS resources anymore:

    • Take down the service: ecs-cli compose service rm --cluster-config zb-cfg
    • Remove the whole cluster: ecs-cli down --force --cluster-config zb-cfg

    Closing thoughts

    I am curious to hear what others think about pairing Zeebe with Lambda. Does the combination make sense? What would you do differently? Also, for anyone who uses or has used Step Functions, I am curious to hear how this solution compares. I had not used ECS much before this and I was pleasantly surprised by how easy it was to provision and use. It seems like they’ve added a lot of features since the last time I looked at it to try to keep pace with Kubernetes. I also like how they use a docker-compose file and Docker-like commands so you can work with your containers in the cloud more or less the same way you would locally. My one big concern and hesitation about getting good at ECS is that it’s clearly not the industry standard. Practically, as engineers it just makes way more sense to use what everyone else is using, because that’s what everyone already knows.

    I think Zeebe is a very promising new product; I also think they have their work cut out for them because there are a lot of competitors in this space. Some popular tools that fill the same niche are Netflix’s Conductor and AWS’s own Step Functions. They face competition not just from competitor frameworks but also from homespun orchestration solutions using event buses and other lower-level orchestration/choreography tools. I think that a lot of orgs and developers don’t even realize that there is a tool that can help them glue together or orchestrate microservices. In a lot of cases people tend to write this code themselves. In many cases it is a conscious decision, because they feel that adding in another library just creates additional complexity, makes their app harder to understand for new developers, or perhaps even limits what they can do. I totally get that concern, as I’ve had these problems myself. For my personal development and collaboration style I think Zeebe makes a lot of sense for a broad range of tech problems. I hear about a lot of people reaching for Kafka to achieve EDA. I have nothing against Kafka and I think it is a technically impressive piece of engineering.
    An experienced dev can probably get up and running with Kafka in a day, but they definitely won’t be using it optimally, or even getting real benefits from it, for a long time, because it is so complicated and requires a lot of use-case-specific configuration. Kafka is more low-level than Zeebe and therefore applies to a broader range of use cases. However, I think that if you are just using Kafka as an event bus to drive a collection of choreographed microservices, Zeebe may offer a simpler and actually more powerful solution. With Kafka, things like event replay, retry strategies, and monitoring solutions at a business-process level are capabilities we need to build ourselves. Zeebe is appealing to me because they’ve found a way to generalize these needs. This allows us as developers to simply use their API to endow our apps with these advanced features and ultimately solve a business problem.

  • Automatic Text Summarization: Demystified

    As the automation market is rounding the corner into the mainstream, organizations are looking for new ways to leverage Artificial Intelligence to streamline their day-to-day business processes. The purpose of this blog post is to address common questions we get asked by our customers around Automatic Text Summarization, which is the process of using AI to shorten a text document and create a summary of its major points.

    What are the different Automatic Text Summarization approaches? When does it make sense to leverage these approaches?

    Extractive text summarization is the process of isolating sentences or phrases that capture the gist of the original text and constructing a new, shorter text from the isolated parts. Methods for extractive summarization usually entail ranking each sentence of a document according to factors such as keyword frequency, length, total number of keywords, etc. Abstractive text summarization aims to understand the text as a whole and present a summary made of new, generated sentences. Abstractive summarization is a more ‘human-like’ process, but it is also more difficult to implement and to produce coherent, accurate summaries with. Because abstractive methods are based around the AI achieving an understanding of the text and subsequently producing its own version of the text, such methods are hampered by issues with semantic representation, inferencing, and natural language generation. The practice of abstractive summarization is much younger and less developed than that of extractive summarization. Since extractive summarization methods are generally better understood, they tend to result in better summaries than abstractive methods. For now, extractive summarizers are both more time-efficient to implement and more reliable than abstractive summarizers, but perhaps in the future, when the field has advanced further, abstractive methods will be the standard.

    In most use cases, we would pursue an extractive method via the TextRank algorithm, since it is text-agnostic and does not rely on pre-existing training data. The TextRank algorithm follows the same general steps as most extractive methods: it begins with constructing a feature-based representation of the sentences from the text, then scores the representations, and finally creates a summary from the highest-scoring sentences. TextRank is an unsupervised machine learning algorithm. The steps are described in more detail below (a minimal code sketch follows below as well).

    1. Tokenize the input document to prepare it for analysis.
    2. Generate a similarity matrix across all of the sentences using a metric like cosine similarity.
    3. Rank all of the sentences in the similarity matrix and sort by rank. Sentences would be ranked according to a variety of features like keyword frequency, length, total number of keywords, etc.
    4. Choose a summary length cutoff and output the summary, which would consist of the highest-ranked sentences. The length cutoff would be set to a certain number of sentences or characters.

    Does the text type (rhetorical structure, length, etc.) affect the approach?

    TextRank is a text-agnostic algorithm, meaning the structure, genre, and length of the text, in theory, should not matter. The main risk of automatically summarizing long documents or multiple documents about the same topic is having a repetitive summary.
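    As a rough illustration of the extractive pipeline described above (not Cedrus tooling, just a minimal sketch assuming scikit-learn, networkx, and a naive sentence splitter), a TextRank-style summarizer can be put together in a few lines of Python:

    ```python
    # Minimal extractive, TextRank-style summarizer (illustrative sketch only).
    import re
    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def summarize(text: str, max_sentences: int = 3) -> str:
        # 1. Tokenize the document into sentences (naive regex split for illustration).
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        if len(sentences) <= max_sentences:
            return text

        # 2. Build feature-based sentence representations and a similarity matrix.
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
        similarity = cosine_similarity(tfidf)

        # 3. Rank sentences by running PageRank over the similarity graph.
        graph = nx.from_numpy_array(similarity)
        scores = nx.pagerank(graph)

        # 4. Apply the length cutoff and output the highest-ranked sentences,
        #    kept in their original order to preserve readability.
        top = sorted(scores, key=scores.get, reverse=True)[:max_sentences]
        return " ".join(sentences[i] for i in sorted(top))

    if __name__ == "__main__":
        sample = ("Extractive summarization selects existing sentences. "
                  "Abstractive summarization generates new sentences. "
                  "TextRank is an unsupervised extractive method. "
                  "It ranks sentences using a similarity graph.")
        print(summarize(sample, max_sentences=2))
    ```

    Note that this sketch does nothing to de-duplicate near-identical high-ranking sentences, which is exactly the gap the LexRank variant discussed next addresses.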
    There is an alternative version of TextRank called LexRank that is basically the same but involves a step, right before creating the summary, where highly ranked sentences are checked against each other to make sure the final selection of sentences isn’t too repetitive. In general, abstractive methods have a harder time with both long documents and documents with long sentences, due to the nature of AI ‘memory’ and the way LSTM (long short-term memory) neural networks handle the retention of past information to inform future processing.

    Is there a need for human intervention?

    Oftentimes, summary outputs made by ATS are inaccurate and require human intervention. Extractive summaries can be stilted in terms of flow from sentence to sentence and grammatical consistency. Abstractive summaries can be factually wrong, so it’s likely beneficial to have someone QA them.

    What is the level of AI model training required to have a system that accurately summarizes texts?

    TextRank requires no prior training since it’s an unsupervised method. Other extractive methods treat automatic summarization as a classification problem and therefore require labeled training data, which would basically mean having several documents with corresponding human-written summaries.

    What are some common use cases for Automatic Text Summarization?

    • Creating abstracts for scientific or technical documents.
    • Condensing news into something that could fit into a push notification.
    • Headline generation.
    • Blurbs for curated content (e.g., a newsletter that sends articles, book recommendations, etc. to users; ATS could generate blurbs for the material).
    • Question-and-answer chatbots.
    • Otherwise condensing verbose material when the audience only needs the ‘meat’ of it.

    If you’d like to learn more about Automatic Text Summarization, feel free to reach out to cognitive-automation@cedrus.digital and we’ll set up a meeting with one of our automation experts!

  • AWS Blog: FADEL Reduces API Generation from Days to 30 Minutes on AWS

    Read about our success with Fadel on the AWS blog here: https://aws.amazon.com/partners/success/fadel-cedrus/

  • Chatbots For Highly Regulated Industries

    Customer service excellence has always been one of the primary factors for customer retention and growth. However, the recent paradigm shift in consumer preference towards messaging in business communication, and the ever-increasing expectation of an immediate response, have started driving organizations to rethink their online digital experience. In research conducted by Forrester, messaging ranked as the No. 1 customer service channel preferred by consumers in South Korea, Singapore, India, and the US, and among the top three preferred channels across the world.

    Chatbots, or conversational services, today are either rule-based or AI-powered computer programs that interpret customer requests via natural language and map them to predefined Intents and Actions. There are a number of AI conversational service platforms on the market today; however, choosing the right platform for your use case or your organization as a whole can be very challenging. Factors such as data privacy and security can quickly become a barrier to leveraging chatbots or implementing meaningful use cases. For instance, in healthcare, using a cloud-based platform to build a Health Advisor that provides general health recommendations based on a member’s activity, diet, and medical history will legitimately raise data privacy concerns, since sensitive PHI data will be going through a third party (i.e., a cloud service provider).

    When implementing chatbot solutions in these highly regulated industries (such as healthcare) where data privacy and security are primary concerns, it’s imperative to look for chatbot solutions that can reside within enterprise firewalls (on-premise). This is critical because these organizations need to have complete control over all aspects of their data, and they need the flexibility to enforce data privacy, security, and compliance policies.

    The Cognitive Business Automation team has formulated a thorough analysis of our recommended approach and best practices. In this whitepaper, we discuss how an open source / on-premise conversational service platform can leverage AI and protect sensitive data. For the purpose of this analysis, we have chosen RASA as an example and covered the following aspects:

    • Usability: What is the ease of use and learning curve?
    • Configuration: How do you properly configure bots to meet the requirements?
    • Data Security: Where and how is sensitive data being stored?
    • Compliance: Is RASA HIPAA certified? Is it HIPAA compliant?
    • Integrations: Which messaging platforms can be integrated with RASA-based chatbots?
    • Languages: Which languages are supported by RASA?
    • Pricing: What premium features are available? What support options?

    To get access to the analysis, please click here.

  • DLP in the Cloud – How to choose the right CASB

    As cloud computing platforms rapidly evolve, Information Security executives are finding it difficult to choose the right CASB technology that ensures Cloud DLP requirements are met. In this blog post, I will address the key dimensions of Cloud DLP and show what use cases are critical in order to come as close as possible to complete coverage from a data protection perspective. I will also address common issues such as Bring Your Own Device (BYOD) as well as off-network access to both sanctioned and unsanctioned services.

    DLP in the Cloud

    A Brief Lesson on the History of DLP

    DLP technology has been around for some time now, but with the rapid expansion of cloud computing, requirements are changing. The original on-premise DLP products are heavily focused on endpoint protection. The goal is to prevent documents from a user’s managed device from being shared through such channels as a USB device connected to the machine, or via the corporate email system. On-premise DLP tools provide rich capabilities for analyzing document contents and searching for specific data necessary to flag the document as restricted – for example, by looking for personally identifiable information (PII) or data that could be in regulatory violation, such as HIPAA information. Based on this analysis, the tools are able to restrict inappropriate sharing on traditional vectors.

    The Cloud

    The rise of Cloud as a platform has created gaps that traditional on-premise DLP is unable to fill. Users might now create their documents in the cloud, on a platform such as Office365, and share them inappropriately with tools such as Box or iCloud. Users are also much more likely to work from home or work on unmanaged devices that do not have an endpoint protection agent. Cloud Access Security Brokers (CASB) have evolved in part to fill this gap.

    The Scope of Cloud Coverage

    This section will focus on the various dimensions of coverage that need to be addressed. I will then look at CASB DLP features, and how they can be used in concert to meet this set of requirements.

    1. User Location

    This encompasses the following distinct dimensions:

    • Network connectivity: is the user connected to the corporate network, or not? (Note that this also includes VPN-connected users.)
    • Physical location: a user might be working in their usual office, at home, or at some other remote location. Note that the user’s physical location could potentially have a regulatory impact, based on data protection jurisdiction (e.g. a US-based user working on a mobile device while traveling in Europe).

    2. Device Protection

    The user could be working on a corporate-owned device that is managed and locked down, or on a personal device that is unmanaged.

    3. Sanctioned or Unsanctioned Application

    The user could be working with a sanctioned cloud application that they need to log into using corporate credentials, such as Salesforce or Amazon Web Services (AWS). Alternatively, they could be sharing a document on an untrusted cloud service such as Photobucket or Baidu SkyDrive.

    4. Corporate vs Personal Instances

    Even if a user is working with a sanctioned application, such as Box, they could also have an individual account with the same service. The DLP requirements may be different between these two cases. In addition, as we will see, the CASB capabilities for monitoring one or the other may also be different.

    CASB DLP Options

    CASBs provide (up to) three distinct operational models for applying DLP content-aware policies to cloud traffic.
    When selecting a CASB, it’s important to understand these models and which requirements they address. It’s also important to look at the details of how they can be implemented, to ensure that they fit into your existing network and security architecture.

    1. API Integration

    Most CASBs provide the ability to set up API connectors to some number of applications, typically widely used enterprise-level applications like Office365, Salesforce or Box. A trusted connection is created between the CASB and the target application (typically using OAuth) and the CASB is able to monitor the data stored in the application, looking for policy violations. Depending on the capabilities of the API, this may be quasi-real-time (using Webhook-based callbacks) or periodic (using polling). This model covers:

    • Sanctioned applications only (there has to be a corporate account)
    • Corporate instances only
    • User activity from any location (it’s not inline, so no network restrictions)
    • Activity from any device, managed or unmanaged

    However, note that the model is reactive – there is no way to prevent the activity from happening, but the violation will be reported, and usually quite quickly.

    2. Reverse Proxy

    Another feature of many (but not all) CASBs is the Reverse Proxy. As with API, this only applies to a small number of well-known applications that are specifically configured in the tool. It also requires that the application is accessed using the customer’s SAML-based single sign-on capability, as it is implemented by inserting the CASB into the SAML flow as a proxy. The CASB is configured to look like a Service Provider to the SAML Identity Provider (ADFS, Okta, Ping, etc.) and to look like an IdP to the Service Provider (SaaS application). This model covers:

    • Sanctioned applications only (there has to be a corporate account)
    • Corporate instances only
    • User activity from any location (does not rely on any network-level forwarding)
    • Activity from any device, managed or unmanaged

    As you can see, the scope of coverage is similar to the API model. The difference is that this is real-time coverage, in line with the user’s interactions with the tool, so now the DLP can be more proactive, and can actually prevent restricted activities from taking place.

    3. Forward Proxy

    The final model is the forward proxy which, it should be noted, is the only model that covers activity in non-sanctioned applications. In this scenario, traffic that is being sent from the user’s device to a cloud application is first routed in some way to the CASB. There are several possible ways in which this routing can happen:

    • An agent or mobile profile might be installed on the device, and used to control the traffic routing to the CASB.
    • For on-premise or VPN-connected traffic, network-level routing might be used to direct appropriate traffic. This might be implemented in a variety of ways, including PAC file, DNS, and proxy chaining from the existing Web Proxy.

    The capabilities are slightly different between the agent and agentless models, so we will address them separately. The agentless model covers:

    • Sanctioned and unsanctioned applications
    • Corporate or individual instances
    • User activity from devices that are connected to the corporate network only
    • Activity from any device, managed or unmanaged

    The agent/mobile profile model covers:

    • Sanctioned and unsanctioned applications
    • Corporate or individual instances
    • User activity from devices in any location, on any network
    • Activity from managed devices only.
    Forward proxy covers most use cases, other than access to sanctioned applications from an unmanaged device that is not connected to the network. For sanctioned applications, this gap can be filled by either the API or Reverse Proxy model. But note that for complete coverage of all unsanctioned traffic from any location, you would probably need a combination of both the agent and the network-routed (agentless) solutions. There are good reasons to implement both. For example, even when using the agent, there may be older devices on the network that are not able to install it (e.g. older versions of Windows still in use in some departments).

    4. Gaps

    If you were looking carefully, you might have noticed that there is still a gap that CASB does not cover. This is the use case where the user is accessing an unsanctioned application from an unmanaged device that is not connected to the corporate network. But in this case, we need to ask the question: how did the offending document get onto that device in the first place? If it was sent via email, or the user moved it via a USB device, then we are back in the realm of traditional on-premise DLP. This is not replaced by CASB; the two work together to provide more complete coverage.

    CASB Selection

    As you can see from this article, there are a lot of factors involved in choosing the right CASB in order to have the most comprehensive Cloud DLP coverage. Here are some of the key items to consider:

    • Does the CASB support all three of the major operational models for DLP policy enforcement (Reverse Proxy, Forward Proxy, and API)?
    • For Reverse Proxy and API, does it support the applications that you care about most?
    • Does it support both agent and agentless models for Forward Proxy?
    • Does it support network routing options that work inside your network infrastructure?
    • Does it provide the ability to offload DLP decision making to your existing on-premise DLP engine, or does it support compatible DLP policy rules?
    • Can it distinguish between corporate and private instances of the same application?
    • Does it work with the set of devices that you use in your enterprise?

    I hope this article will help you choose wisely when selecting a CASB, and if you have any additional questions, feel free to reach out to me at paul.ilechko@cedrus.digital

    Interested in CASB but unsure where to start? Download The Road to CASB: 2019 Business Requirements Whitepaper. The goal of this paper is to provide a kickstart through a working set of requirements for you to leverage and modify as needed in your search for a CASB solution. This set of requirements provides some structure on how CASBs can fit in your organization’s overall Information Security strategy.

  • The future of IoT in Healthcare

    It seems as though the Internet-of-Things (IoT) is quickly becoming a ubiquitous thread within our global social fabric. And, far beyond consumer applications of the technology, industries of all types are quickly adopting functionality to make their own business processes work better, faster, and smarter than ever thought possible. For instance, just a few weeks ago I read an article on how IoT is being used to save rhinos and elephants from poachers—if that’s not a perfect example of the reach and impact of IoT then I don’t know what is. But what about the technology that’s being used closer to home?

    As we move into 2019, one of the fastest growing industries to implement IoT technology is the medical industry. And with more than 962 million people aged 60 and over worldwide—comprising 13% of the global population—the need for extended care to the home is becoming a pressing argument for the adoption of new ways to connect to and care for the world’s aging population. The utilization of IoT in the medical industry doesn’t just begin and end with the elderly; in fact, far from it. IoT is being leveraged across the board for everything from remote monitoring of patients who suffer from chronic or long-term conditions, to the tracking of patient medication orders, to wearable devices, and more. Even within hospitals, IoT is being used to monitor a number of devices, from infusion pumps to hospital beds with embedded sensors that monitor patient vital signs.

    As for the adoption of these types of technologies, here are just a few statistics that show IoT adoption rates:

    • 60% of healthcare organizations have introduced IoT devices into their facilities.
    • 70% of healthcare organizations use IoT for monitoring and maintenance.
    • 87% of healthcare organizations plan to implement IoT technology by the end of 2019.

    The reason behind such staggering numbers? Among institutions looking to implement IoT technology, the majority cited increased workforce productivity (57%) and cost savings (57%) as key drivers. However, as with all things in the modern age, the risks of using new types of technology are always present, as are concerns pertaining to cybersecurity. For healthcare providers, breaches are of monumental concern, especially protecting health information through compliance with the stringent regulations of the Health Insurance Portability and Accountability Act (HIPAA). Yet even with so many regulations in place, almost 90% of healthcare organizations have suffered an IoT-related security breach.

    The lesson to be learned? IoT is one of the most important aspects and drivers of modern digital transformation to date. And with that, the adoption of the technology is going to continue to grow exponentially in the coming years. Therefore, when deciding to implement any type of technology, IoT or otherwise, choosing the right partner for guidance becomes imperative.

  • AI and IoT is the future of Supply Chain Management

    AI and IoT are without a doubt among the most important topics in business this decade. However, the scope of the topic itself seems to be mostly relegated to the more consumer-based aspects of the technology. Whether it’s smart home devices or the link between apps and customer experience, and so on, the general topic of conversation revolves around end-user applications. However, there is far more to be considered. As business continues to evolve, the adoption and impact of AI and IoT are nothing short of astounding. And one of the best examples of the non-consumer applications of the technology is within supply chain management.

    For instance, one of the best examples of how AI and IoT are impacting daily business processes is through advanced asset tracking. And though asset tracking is nothing new—companies have always needed to know where their assets are while in transit—the injection of new technology is vastly changing the way assets are tracked. Now, companies can not only know where assets are at all times, but they can also know the environmental factors surrounding the goods—monitoring everything from temperature, potential impact thresholds, moisture, and more—ensuring that asset damage is greatly minimized. This means far less loss and, ultimately, a better bottom line.

    Of course, with asset tracking also comes far better inventory management. Using similar sensor technology, companies can now easily track the amount of inventory in stock. When quantities reach a certain threshold, management can be notified in real time that additional goods must be ordered or, in many cases, the system, using advanced AI, can simply place the order as needed without human interaction—greatly streamlining the process and making for a better end-user experience, knowing that goods are in stock when needed.

    However, the benefits to inventory don’t end there. As the technology becomes more and more advanced, so do the analytics that can be gathered regarding the product. Real-time data can be applied to sales and marketing, so they know what products are most popular, what products need an additional market push, etc. By connecting the supply chain to other business units, companies can work in real time to increase sales and to reduce expenses on goods that are not as popular—better efficiency, better margin and, again, a better bottom line.

    Another major benefit of AI and IoT within supply chain management is that of corporate responsibility. In an age of endless information, consumers are now making informed choices on what they buy, where they buy, and who they buy from. This dynamic is now known as the difference between companies that do “well” versus those that do “good”—meaning that even though a company may do well as it pertains to profits, the ethical aspects of the company, in how it does good things for the environment, treats workers ethically, and so on, become paramount for future success. This is where AI and IoT become an incredibly important part of the equation. The transparency created by the technology and tracking enables businesses to show consumers the ethical nature of their company. This technological advantage immediately aligns companies with the social conscience of their target demographics, while creating a tangible market advantage against competitors. And lastly, one cannot forget about something as simple, but equally important, as maintenance.
    From vehicles to ships, factory equipment, and more, AI and IoT are becoming a game changer for companies: not only reducing inefficiencies, but also greatly mitigating the risk of costly downtime, and more. Instead of scheduled maintenance based on an average timeframe for a system to need potential fixes or upgrades, IoT and AI can create alerts, monitor system performance, and predict when a system or part may need maintenance before something catastrophic happens. In all, AI and IoT are being used in so many different ways that, though the average person may never realize it, the impact on business is almost immeasurable—making for a future that is far more efficient, far more manageable and, of course, far more profitable. Read our How IoT and AI is revolutionizing business eBook today!

  • The ever-vigilant battle of Identity and Access Management (IAM)

    In today’s always-connected work environment, knowing who has access to your company’s digital assets and, more importantly, the level of access they have is paramount. After all, one of the best lines of defense in the never-ending battle of cybersecurity is simply controlling the known—the unknown is a different story. The best way to tackle controlling the known is to follow the basics of Identity and Access Management (IAM)—something that never gets old, but in many instances can be trapped at a point in time, whenever that last IAM or Active Directory cleanup project was completed. The important thing as it pertains to IAM is that it’s not a “set and forget” practice. It takes ongoing due diligence that must be continually managed through organizational and technological change to ensure your digital assets are safe.

    But the question then becomes: Who is responsible for guarding the proverbial gates? In many cases, the practice falls directly on IT, a somewhat logical place to begin, as IT security usually falls within the realm of the IT department. However, with so many changes in the rapidly evolving cloud era, the idea that IT can seemingly do it all is no longer realistic. Though IT will almost always be responsible for the systems and infrastructure itself, the management of IAM can and should fall on the administrative side of the business. An easy way to look at this is through the “gatekeeper.” There are many roles such as employees, contractors, partners, and others, but many of these roles are largely managed through administrative silos. This means that the information regarding what these roles should have access to is largely managed by those in HR, Payroll, Project Managers, Application and System Owners, and so on. As new people come on board, as employees move between roles and departments, and as people leave or are terminated, these scenarios should fall squarely on security administration teams through a set of Identity and Access systems, along with a set of policies and procedures dedicated to IAM and all things compliance related.

    But like any managerial procedure, the question of “how” ultimately presents itself. There are a few simple steps organizations can follow that can help with IAM. The first is that of defining those who work within your organization. Knowing who is who may seem trivial to some; however, being able to identify the difference between employees, contractors, consultants, etc., becomes the building block of a solid IAM practice. Often, there are multiple systems of record, and some person types are managed on spreadsheets in departments. Centralizing an identity store is a must.

    Secondly, once your workforce is defined in a way that is uniquely identifiable, the ability to manage identities will become evident, even if not simple. The best way to do so is to implement a centralized system of management for all identities within the organization. This system should provide a consolidated view of all identities, enable the management of many access types through a management console available to appropriate roles, and have integrated automation that will get rid of older, out-of-date identities, orphaned identities, or simply unneeded ones. This ensures that every user account ID is accounted for at all times. Furthermore, a good place to start is with the primary systems within a company’s inherent IT infrastructure, including systems such as Active Directory, Mail, and critical systems such as ERP.
    These systems, though not completely exhaustive (there are always exceptions to every rule), will provide the majority of the insight in the most expedited way. And once these disparate systems have been integrated and audited, the next step becomes one of ongoing access review, attestation, and access management. Implementing a company-wide program for IAM will require providing business owners with the knowledge of who resides in the system and granting those owners control of, and accountability for, access. Working closely with IT to identify the current access of individuals—an inventory of identities and permissions—and providing standardized IAM practices will enable business owners to then become the true custodians of systems, data, and applications, as typically defined by policy.

    This will require implementing workflow. This can be represented by a request and approval process to ensure changes are well managed and documented, and it can enable those seeking permissions to request them through a centralized Identity Management system driven by person systems of record. It’s this centralization and workflow that will help remove IT from the decision-making phase, enabling them to concentrate on IT service delivery. A primary outcome of a qualifiable and quantifiable Identity Management system and process, driven by person systems of record and combined with well-defined procedures that outline all aspects of IAM, is that companies will inevitably become compliant with key access-related regulatory mandates. This is a critical requirement as more and more organizations must adhere to strict industry and government regulations while simultaneously opening the IT service landscape to the cloud.

    Always follow the cardinal rule of “check, re-check, and then check again.” Maintaining a watchful eye on all things associated with IAM is essential for success. As mentioned, IAM is never a set-it-and-forget-it scenario. Knowing who has access to what, when, and how is just part of the ever-vigilant battle of Identity and Access Management—a war that is never won outright, but one that can be kept in hand by keeping an eye on the fortress gates.

  • The Brave New World of Cognitive Business Automation

    It’s no surprise that Cognitive Business Automation has commanded the spotlight as of late. After all, it has single-handedly become the greatest driving factor of innovation for business in history—primarily due to the influx of, and need for, process-driven applications. But what does it really mean?

    Setting aside the over-used technical jargon that seems to get thrown into the mix, the reality is that process-driven applications were, and continue to be, the cornerstone of the business-to-customer relationship. However, as the human race continues to evolve through data and technology, companies must learn how to leverage customer data as a whole and embrace new ways of thinking to make every aspect of their product or service unique for each customer. As such, Cognitive Business Automation is the field that enables us to connect customers and services through data to deliver a far better and more personalized customer experience for every individual, while maintaining optimal operational efficiency. It’s a revolutionary step forward for business process, one that has pulled the collective world squarely into the new era of Artificial Intelligence and Robotic Process Automation. Process-driven applications empowered by AI and supported by RPA lead to further automation, increasing operational efficiency, whether through decision automation, unstructured data processing, or task automation, and will enable organizations to provide a more personalized and engaging customer experience.

    But what does all of this AI and business process mean in the long term for human interaction? Do machines become the rulers of the world? No . . . in fact, we are still very much needed. In a global survey of 300 executives, Forbes Insights uncovered a fascinating indicator of the state of automation: while respondents are happy with their process automation initiatives so far, they acknowledge that manual intervention and human orchestration are still the rule rather than the exception.

    A brief lesson in history . . . in order to look to the future. Process-driven applications have been at the heart of Digital Transformation initiatives for many organizations for well over a decade. For instance, some of the simplest use cases of process-driven applications could be pricing an insurance claim, applying for a mortgage account, on-boarding a new customer, or fulfilling a retail order. In all of these cases, the process is largely the same: a collection of structured activities triggered by other system events or user actions (Process Inputs), governed by business rules (Decisions), set by a system or a user (Process Controls), and enabled through data services and system integration points (Process Enablers) to produce a specific Output, such as a business goal.

    Traditionally, process engineers and solution architects relied heavily on Business Process Management (BPM) methods and tools to optimize these process-driven applications; these methods and tools helped organizations methodically identify and implement improvement opportunities to cut operational costs, reduce processing time, increase revenue and enhance the customer experience. However, traditional BPM systems fell short of supporting the innovation needed to address modern-world complexities, the need for a more personalized customer experience, and the speed at which organizations wanted to achieve that innovation. These systems relied heavily on humans to perform cognitive tasks, leading to increased human errors and operational costs.
    Which brings us to the bright and very exciting future. Now, with next-generation process-driven applications, organizations have begun to incorporate AI and RPA to gain greater operational efficiency and create more engaging user experiences. Artificial Intelligence, for example, is being leveraged to enrich business processes in a multitude of ways. For instance, Decision Automation through Machine Learning is being used to spot patterns or pattern deviations that can create insight that a business process can leverage to automate decisions handled by humans in the past (e.g. assessing the risk of a new mortgage application, automating pended claims adjudication, detecting fraudulent transactions). It’s also being used to create more well-rounded customer experiences through text analysis and natural language processing, extrapolating process actions from natural language such as email, text messages, chatbots, etc., and instantly providing the customer with a more personalized and engaging experience. As per the previously mentioned examples, companies can now enable such things as adding a dependent under an insurance policy through a chatbot or a personal voice assistant, analyzing customer sentiment or frustration by interpreting an email and proactively reaching out, and so much more—again, all addressing the greater picture of personalization.

    Then of course there is the Robotic Process Automation aspect of the equation—one that presents organizations with an entirely new set of opportunities. For example, organizations can leverage RPA to automate lengthy and time-consuming repetitive user tasks. The prime use case for that is customer service groups within any organization. Customer service agents usually need to repeat the same task for different customers. When these tasks are automated with bots, it can dramatically reduce processing time and operational costs. But far beyond that, there is also the ability to leverage RPA to perform non-invasive integrations. Most mission-critical enterprise systems are still legacy systems that are inaccessible outside a user interface or command line and require lengthy modernization initiatives to become accessible. RPA enables organizations to take a non-invasive approach by mimicking human steps and eliminating the need to modernize large monolithic systems.

    In all, Cognitive Business Automation is changing the world as we know it, as process-driven applications empowered by AI insights, Natural Language Processing, and Robotic Process Automation take hold. The results are already countless. From expanded automation possibilities within any business process (whether through decision automation, unstructured data processing, or task/system automation) to reduced operational costs, cycle time, and errors due to minimal human intervention, organizations of all types are saving money while becoming more efficient. And though traditional business process management tools have in some cases negatively impacted the end-user/consumer experience, cognitive business automation has vastly reversed that old-school notion. In fact, these same technologies that are saving companies time and money are also making the customer experience better. From enhanced customer experience due to prompt and personalized responses, to newly created services and products, all thanks to the intersection of workflow automation, artificial intelligence, and robotic process automation—the possibilities are seemingly endless.
The only question now becomes one of possibilities and imagination. Would you like to learn more about BPM, AI, and RPA and how they work together? Click the link below to watch our webinar on-demand to see firsthand how a Fortune 100 healthcare organization is leveraging this technology to save money and create a better customer experience simultaneously. View Webinar On-Demand

  • How to leverage design thinking to discover what your users need

    Design thinking emerged in the 1950s and 1960s when the architecture and engineering fields grappled with the challenges arising from widespread social tensions. Today, design thinking is the cornerstone for some of the largest companies in the world. Prestigious universities, business schools, and forward-thinking companies have adopted design thinking, occasionally tailoring their approach to fit their specific context or brand values. Even so, some companies are only just transitioning to a more design-centered approach, while others have yet to discover the long-term benefits.

    The structured framework of design thinking consists of five phases: Empathize, Define, Ideate, Prototype, Test. These phases are used to tackle unknown or ill-defined problems by reframing them in human-centric ways that enable the participants in the process to find out what users really need. Within the design thinking framework, it’s critical to get feedback from your users through things like research, prototyping, and usability testing. This can show you why and where users are having problems. It can also allow you to judge whether holding a more structured design thinking workshop would add even more value to the company and its customers.

    Companies and individuals who employ design thinking often customize workshops to create a collaborative environment in which to explore assumptions that will lead to potential solutions. These workshops can last anywhere from a few hours to a few days and can help to quickly identify users’ needs and pain points, resulting in an innovative and testable solution. The challenge, of course, is to choose the correct approach to design thinking, one that will fit your company’s specific context.

    Three leaders in their respective fields—Google, IBM, and Amazon—developed their methods and workshops through their interpretation and approach to design thinking. They then used these interpretations to interact with their clients and ensure that the final outcomes met their clients’ specific requirements. Each method and workshop focuses on identifying a problem with a team of people from varying disciplines. The goal is to brainstorm many ideas before converging towards one potential solution. However, key differences exist in how they define, develop, test, and build the solution—no hard and fast rules exist to define the workshops. Google, IBM, and Amazon developed their workshops based on available resources, time frames, and the business opportunity or challenge under discussion.

    Companies are diverse and spread across many industries. They do not work on similarly sized problems, or even on problems with the same level of complexity. Therefore, design thinking workshops of varying structures are required to meet the differing needs of this diverse landscape. The question then becomes: What should you look for in a design thinking workshop to ensure you identify the correct challenge and reach the right solution? When you consider that you are committing your resources and employees to solve a critical company challenge, you need to ensure that your investment reaps the expected reward: a viable solution to the problem.

    Let’s take a look at some of the key elements in a design thinking workshop:

    • A focus on empathy is essential. One of the goals of the design thinking workshop is to ensure that all participants acquire an empathic understanding of their end user, as well as the problem that needs to be solved.
    • A structure needs to be laid out using a classic approach: identify a problem, brainstorm many solutions, converge on the best option, and then define a goal to be tested, iterated, and implemented.
    • The workshop should include a review of the problem so that everyone is on the same page. It should also identify the people involved, their goals, and why you should care about them. In addition, the users’ journey should be mapped out so that it’s easier for everyone to empathize with them.

    The benefits of good design thinking workshops are immeasurable. You’ll view problems from a different perspective, stimulate innovative thinking and creative problem solving, ensure final outcomes meet the company’s specific requirements and, of course, encourage empathic interactions among your employees. Read Our Empathy: essential element in Design Thinking Methodology eBook Today
