
  • Introduction to DynamoDB and Modeling Relational Data (PART 3)

    Dealing with Relational Data

    With the basic structure of DynamoDB in mind, how do we go about building an application that has related data that would normally be split up in a relational database? For this example, imagine we have a software consultancy firm with a number of employees working on projects for clients. In addition, we want to track the technology used within these projects. Because everyone loves a good entity relationship diagram, the database may be structured like this: (ER diagram: clients, projects, employees, and technologies)

    Approaching the Problem

    When using a relational database, these entities would be normalized into different tables, and we would use SQL queries to combine them dynamically into views our application can use. DynamoDB tries to reduce the computational overhead of such queries by storing data in the format in which it will be consumed. In fact, according to AWS, a well-designed application requires only one table! How can this be so?

    If We Tried to Use DynamoDB Like an RDBMS

    You may imagine that we would create an individual table for clients, projects, employees, and technologies. However, this pattern doesn't leverage DynamoDB in the most effective way. If we have a scenario where we need to delete a client, we would want their projects deleted as well. If we used multiple tables, our process would look like:

    1. Find all client projects
    2. Delete all projects
    3. Delete the client

    Problems:

    - Multiple API calls
    - Very error-prone as code gets refactored and extended
    - We lose all the advantages of working with a NoSQL solution and have to deal with all the disadvantages

    New Application Requirements

    Let's rethink the way our application is structured to leverage a NoSQL paradigm instead. If we use partition keys, sort keys, and indexes effectively, we can model that relational data in a way that allows complex queries, deletions, and insertions. Our new table may look like this: (table not shown) Note how the partition key is the same for both projects and clients, but the sort key is different.
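    Since the original table figure did not survive, here is a minimal Python sketch of what such single-table items might look like. The key names (PK, SK) and the key-value formats are illustrative assumptions, not taken from an actual schema:

```python
# A minimal sketch of adjacency-list items in a single DynamoDB table.
# The key names (PK, SK) and value formats below are illustrative
# assumptions, not an actual production schema.
items = [
    {"PK": "CLIENT#acme",   "SK": "CLIENT#acme",        "name": "Acme Corp"},
    {"PK": "CLIENT#acme",   "SK": "PROJECT#website",    "status": "active"},
    {"PK": "CLIENT#acme",   "SK": "PROJECT#mobile-app", "status": "planned"},
    {"PK": "CLIENT#globex", "SK": "CLIENT#globex",      "name": "Globex"},
]

# A single query on the partition key returns the client AND all of its
# projects together -- the single-table equivalent of a relational join.
acme_records = [item for item in items if item["PK"] == "CLIENT#acme"]
print(len(acme_records))  # 3
```

    Because the client record and its project records share one partition key, deleting a client becomes a single query on that key followed by deletes of the returned items, rather than joins across several tables.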
    This allows you to set up relations to different items by querying on PK for the equivalent of a join. This pattern of design is called the Adjacency List Design Pattern, and you can read more about it in the DynamoDB context here.

    Review

    Modeling Relationships: Just like relational databases, there are guidelines for effectively dealing with one-to-one, many-to-one, and many-to-many data. We store related items close together using partition keys.

    Taking Advantage of Sort Keys: The sort key lets us control the granularity of our data and the order in which it's returned. Partition keys often tell us what "cabinet" or "bucket" our item belongs to. Sort keys are like folders in the cabinet, containing files.

    Access Patterns: Unlike relational databases, we need to know in advance what kinds of queries we're going to execute in order to understand how to store our data. We design our keys and any additional indexes based on the business questions we expect to ask most frequently.

    Conclusions

    Relational database systems (RDBMS) and DynamoDB have different strengths and weaknesses.

    RDBMS Pros:
    - Enforces schema and relationships in data
    - Easier to maintain transactions
    - Flexible queries and efficient data storage

    RDBMS Cons:
    - Must maintain schemas during development and migrate changes
    - Queries are relatively expensive and don't scale well in high-traffic situations (see First Steps for Modeling Relational Data in DynamoDB)

    DynamoDB Pros:
    - Highly available and fast
    - Scales automatically
    - Schemaless, so easier to change

    DynamoDB Cons:
    - Need to rethink normalization and consistency
    - Must understand the queries and materialized views when designing
    - Can only sort on sort keys or secondary indexes

    As with any solution, there is no silver bullet. There are many ways to solve the problems at hand, and we try to find the solution that best fits the need. For more information on DynamoDB and best practices, please visit:

    Click Here to Read Part 1
    Click Here to Read Part 2
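    To make the sort-key idea concrete, here is a small Python simulation of how a DynamoDB query narrows results: an exact match on the partition key plus an optional begins_with condition on the sort key. The item shapes and key names are assumptions for illustration only:

```python
def simulated_query(items, pk, sk_prefix=""):
    """Simulate a DynamoDB Query: exact match on the partition key,
    optional begins_with condition on the sort key, and results in
    sort-key order (as DynamoDB returns them within a partition)."""
    matches = [i for i in items
               if i["PK"] == pk and i["SK"].startswith(sk_prefix)]
    return sorted(matches, key=lambda i: i["SK"])

# Hypothetical single-table items for one client and its projects.
items = [
    {"PK": "CLIENT#acme", "SK": "CLIENT#acme"},
    {"PK": "CLIENT#acme", "SK": "PROJECT#billing"},
    {"PK": "CLIENT#acme", "SK": "PROJECT#website"},
]

everything = simulated_query(items, "CLIENT#acme")            # the "join"
projects = simulated_query(items, "CLIENT#acme", "PROJECT#")  # projects only
print(len(everything), len(projects))  # 3 2
```

    The same partition key answers two different business questions depending on how much of the sort key we constrain, which is exactly the access-pattern-first design described above.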

  • Introduction to DynamoDB and Modeling Relational Data (PART 2)

    Getting started with DynamoDB

    AWS makes it very easy to get started with DynamoDB. Simply log into your AWS account and navigate to DynamoDB > Tables.

    1. Click Create Table
    2. Enter the table name and the name of the partition key, plus an optional sort key if you wish (more on that later)
    3. Click Create

    Congratulations! You have created your first DynamoDB table. No EC2 provisioning, ports to configure, or schemas to set up. You are ready to start inserting data into your new table.

    Basic anatomy of a DynamoDB Table

    DynamoDB is a NoSQL, or non-relational, database and handles things very differently. It fits more closely into the category of a key/value store than a document database like MongoDB. While DynamoDB is able to store items in a familiar object structure, the way that data is accessed is very different, so designing your keys is an important first step.

    Partition / Hash Keys

    A partition refers to the physical storage location where an item is kept, which means related items are stored close together. A great way to visualize a partition key is as a filing cabinet drawer. When looking for your document, the partition key lets you know which drawer to look in. For simple data, the partition key may represent the primary key, but for more complex and related data, the partition key helps group related data together.

    Sort / Range Keys

    A sort key is used to sort the data within a given table, as well as to provide uniqueness as a composite key when combined with the partition key. A great way to visualize a sort key is to imagine alphabetical folders within our file cabinet: the documents are sorted in alphabetical order within the drawer.

    How are they used?

    When making a query, the partition key is used to identify the exact node the data resides on before the rest of the query filters further. This has powerful implications.
    As your data grows, if your partitions are well distributed, your requests will be just as fast regardless of how much data is in your table. A query can be made on the partition key by itself, or you may provide a sort key to help further limit the items returned.

    The sort key serves two purposes: sorting and searching. You can do a partial search on a sort key, unlike a partition key, which must be a full match. The sort key can be used to sort your items in ascending or descending order. One limitation of DynamoDB is that the sort key is one of the only ways to sort your data: you cannot sort on every attribute (more on that with indexes). In addition, you can have only one partition key and one sort key per base table.

    Scan vs Query

    A query takes a very targeted approach, identifying the partition the data resides on before applying any filter. Queries are the preferred method for accessing your data because they are quicker and use fewer of your RCUs. However, not all searches can be conducted with a query. If you don't know the partition key and want to search for a specific attribute value, you will have to use a scan. Scans crawl over every record in your table and collect all items that match the filter expression. This is an expensive operation and should be used sparingly.

    Approaching the Query Problem

    We may have scenarios where we need to sort on additional fields, submit queries with different partition keys, or allow searches on other fields. How do we handle those scenarios given the limitations mentioned above? Indexes, of course!

    A Tale of Two Indexes

    We can have our cake and eat it too by defining a new index. These indexes are a lot like the index in a book: there is a map of the contents, so we can find things quickly. Let's imagine a table where we store books, the employees who checked them out, and when.

    1. Global Secondary Indexes

    Global secondary indexes (GSIs) allow you to define a new partition key and a new sort key.
    When making a query, you select the index you wish to use along with the partition key and sort key values you want to query on. GSIs create a copy of the base table, and this copy is maintained seamlessly in the background. You can create GSIs after the table has been created, but you are limited to five GSIs per table. There are also patterns you can use to optimize your GSIs, such as index overloading.

    2. Local Secondary Indexes

    Local secondary indexes (LSIs) allow you to create additional sort keys on the base table. They use the same partition key as the base table but provide additional attributes to sort or search on. Unlike a GSI, an LSI can only be created when the table is created. The sort key of the base table is projected into the index, where it acts as a non-key attribute. Because LSIs use the base table's partitions, they are limited to 10GB per partition key value. Like GSIs, you can have only five LSIs per table.

    Click Here to Read Part 1
    Click Here to Read Part 3
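    To tie the two index types together, here is a sketch of the create_table parameters one might pass to the AWS SDK (boto3 in Python) for the books/check-out example above. All table, attribute, and index names here are assumptions made for illustration; note how the LSI reuses the base table's partition key while the GSI defines an entirely new one:

```python
# Illustrative boto3 create_table parameters for a hypothetical "Library"
# table. On-demand billing avoids having to specify provisioned throughput.
table_spec = {
    "TableName": "Library",
    "BillingMode": "PAY_PER_REQUEST",
    "AttributeDefinitions": [
        {"AttributeName": "BookId", "AttributeType": "S"},
        {"AttributeName": "Title", "AttributeType": "S"},
        {"AttributeName": "EmployeeId", "AttributeType": "S"},
        {"AttributeName": "CheckedOutDate", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "BookId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "Title", "KeyType": "RANGE"},   # sort key
    ],
    # GSI: an entirely new partition/sort key pair, answering
    # "which books does this employee have checked out, and when?"
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByEmployee",
        "KeySchema": [
            {"AttributeName": "EmployeeId", "KeyType": "HASH"},
            {"AttributeName": "CheckedOutDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # LSI: same partition key as the base table, new sort key.
    # Remember: LSIs can only be declared at table-creation time.
    "LocalSecondaryIndexes": [{
        "IndexName": "ByCheckoutDate",
        "KeySchema": [
            {"AttributeName": "BookId", "KeyType": "HASH"},
            {"AttributeName": "CheckedOutDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
}
# boto3.client("dynamodb").create_table(**table_spec)  # needs AWS credentials
```

    The actual create_table call is left commented out since it requires an AWS account; the dictionary itself is the part that illustrates the index design.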

  • The Critical Need for Cloud Security in Modern Business

    It's no surprise that the cloud has forever changed business as we know it. The ability to access corporate information from virtually anywhere has created a huge difference between how we once managed our daily work lives and how we manage them today. However, with all the benefits the cloud has to offer, there are also inherent risks associated with effectively managing digital assets. For most companies, the most challenging aspect of the cloud seems to be defining the processes needed to first identify the risks and then chart the path to mitigating those risks through actionable policies and tools. Creating a more secure cloud environment can be broken into three distinct categories (at least for this article).

    Step 1. Identifying the Risks

    This is probably the most logical place for most companies to start: identifying where the most risks reside. However, in many cases there is more than meets the eye. First, companies need to look at the risks inherent in the essence of cloud computing: the concept of Bring-Your-Own-Device and the impact it can potentially have on an organization. For instance, unmanaged devices can present immediate control gaps within an organization—the potential for insider data leaks—that can allow data to be downloaded or moved to unknown or unsanctioned applications and storage locations. Furthermore, there are also issues when enabling flexible and remote workforce scenarios that require native application use—all of which must be controlled and monitored for data exfiltration. Then, of course, there are compliance violations.
    Under several regulatory mandates, such as the Gramm-Leach-Bliley Act (GLBA), New York State's Department of Financial Services (NYDFS) Cybersecurity Regulation, and the Health Insurance Portability and Accountability Act of 1996 (HIPAA), to name a few, organizations must protect Non-Public Information (NPI), Protected Health Information (PHI), and Personally Identifiable Information (PII) from unauthorized use. This means that to control access to, and protect, sensitive data, its location must be known or discoverable, and manageable.

    There is also potential exposure from breaches associated with cloud service providers. Though this may sound odd, there are still many companies that think their data is safe simply because it is being looked after by a large, third-party provider. While most enterprise-class cloud service providers have excellent security, an organization must realize that these big providers are also targets and could become compromised.

    Finally, there is an element of rogue IT that must be addressed. Aside from the sanctioned vendors that organizations choose to do business with, what about the unsanctioned ones? Every day, we find our customers' employees using unsanctioned cloud applications. For instance, in a recent audit we detected more than 700 apps in use by the employees of one of our customers. What's more, in this particular case, there was nothing nefarious going on: it came down to unknown terms and conditions that may leave sensitive data susceptible to leaking.

    Step 2. New Rules and Processes

    So once the gaps are identified, what next? Simply put, the need to implement security best practices, including policies, standards, guidance, and processes, becomes paramount. It's this governance that will create and monitor the rules of engagement for the company and its employees, ensuring that everyone is aware of how important security is and how to live by the rules every day.

    Step 3.
    A New Age of Technology

    Then there is the technology itself that needs to be addressed, and this can take shape in a multitude of ways. For instance, the organization can leverage a Cloud Access Security Broker (CASB), which is required to centralize access, manage compliance, and deliver the actual data security for the cloud. Then there is the implementation of a key management platform to centralize key generation, rotation, and data destruction within the cloud itself. Next come the most critical management layers—the Cloud Identity Governance strategy—required to incorporate Federated SSO (FSSO), an Identity Provider (IdP), Privileged Access Management (PAM), and Access Governance into business-critical cloud applications. And finally, the things that actually manage threat events for cloud services: centralized Security Information and Event Management (SIEM) and logging, and User and Entity Behavior Analytics (UEBA).

    In all, this isn't the easiest path to take for any organization, regardless of size or resources. However, the lesson here is that it still needs to be done, as the threats are very real, with lasting consequences. If it seems complicated, that's okay—there are companies such as ours that help the biggest of enterprises secure their respective clouds every day. The most important thing is to make the choice, as soon as possible, to implement real cloud security before the worst-case scenario happens.

  • Introduction to DynamoDB and Modeling Relational Data (PART 1)

    Introduction

    DynamoDB is a powerful data persistence offering in the AWS suite that allows for highly scalable data access. It's quite simple to get started using DynamoDB, and there are a good number of documents on the topic, including Lambda and AppSync integration. While it's easy to get started, modeling complex data can sometimes be challenging to visualize, especially coming from alternative systems like relational databases.

    What we will and will not be covering:

    Will:
    - Conceptual data modeling
    - Dealing with relational data
    - Comparison to relational databases and SQL

    Will not:
    - Specific DynamoDB APIs or SDKs (you can use the CLI, JS, C#, and others)
    - Authentication, authorization, or access control

    The Relational Way

    Let's cover some of the relational database concepts we know, to help us better contrast how DynamoDB changes many of the patterns we are used to.

    SQL / Normalization

    RDBMS (Relational Database Management Systems) generate or materialize views dynamically from a normalized and optimized version of the data, and SQL is the language of choice when making these queries. Normalization is designed to keep the data consistent and reduce redundancy. This often means spreading data across multiple tables to ensure each piece of data is entered in only one place, then linking it back together using complex SQL queries. This helps avoid insertion, update, and deletion anomalies; allows flexible redesign and extensibility; and helps conserve storage space.

    Where the ORM (Object Relational Mapper) comes in

    When looking at normalized data within a database, it becomes very hard to understand the relationships at a glance without some way to group things together that makes sense to the human mind. Objects simply make more sense when approaching a problem. ORMs were designed to bridge that gap, hiding much of the normalization and SQL under an object grouping that makes sense to the application developer.
    Database Chemistry

    When dealing with RDBMS, we tend to think of large, monolithic data stores. When working with these stores, we can control a transaction from beginning to end. However, as we start to expand into more distributed systems and scale out, it becomes much harder to efficiently maintain transactions the way we used to under the ACID paradigm. As a refresher:

    - A: Atomicity – Tasks are all performed or none of them are. If any one fails, the entire transaction fails.
    - C: Consistency – The database must remain in a consistent state, meaning there are no half-completed transactions.
    - I: Isolation – No transaction has access to any other transaction in an unfinished state; each transaction is independent.
    - D: Durability – Once the transaction is complete, it will persist, such that power loss or system breakdowns don't affect the data.

    When we have a largely distributed and scalable system, it becomes very challenging to follow all the rules above. As distributed systems became more popular, the CAP theorem was coined to describe their limitations. In a nutshell, you get only two of the three:

    - C: Consistency – Do all nodes in your cluster see the same data? Do they reliably follow the established rules of the given system?
    - A: Availability – Is the service available upon request? Does each request get either a failure or success response?
    - P: Partition Tolerance – The system continues to operate, even when there is data loss or node failure in parts of the system.

    So, according to the CAP theorem, you can have consistency and availability, availability and partition tolerance, or consistency and partition tolerance, but you can never have all three at once. With the CAP theorem in mind, a new end of the data spectrum was added to complement ACID, like in our pH scales: BASE.
    - BA: Basically Available – There will be a response to every request, but that data could be in an inconsistent state, or the response could be a failure.
    - S: Soft State – Data may change over time, even without input, due to the eventually consistent patterns used to propagate data.
    - E: Eventual Consistency – Once the system stops receiving input, it will eventually become consistent. This does not happen immediately, hence "eventually" consistent.

    These paradigms are at odds with one another, and they provide context for different kinds of systems and their needs. In the past, storage was very expensive, and relational databases optimized for that constraint. Today, storage is relatively cheap, while compute is where most of the cost goes. SQL queries can be quite computationally expensive when joining complex data together into views our application can use. DynamoDB turns many of these concepts on their head by recommending that data be stored in ways optimized to limit computational cost, such as duplicating data and keeping it less normalized.

    Enter DynamoDB

    DynamoDB is designed to scale to the level of the multinational, always-on systems that Amazon runs. The main focus of DynamoDB is availability and scalability, and to meet these needs it works very differently from a relational database.

    DynamoDB in a Nutshell

    - You can store a lot of items (no limit)
    - It's blazingly fast
    - Very scalable (with options to scale automatically)
    - Good for most apps where we know the kinds of business questions we will ask ahead of time and the aggregated structures are well known
    - NoSQL, so there are no schemas to maintain and update

    Capacity and Scaling

    DynamoDB's pricing model revolves around capacity rather than hours consumed or storage used. After the first 25GB, you are still charged for storage, but you are never charged for hourly EC2 instances; you pay only for the requests, or capacity, used. Let's take a moment to break down what each of these means.
    WCU: Write Capacity Unit
    - 1 write of up to 1KB per second
    - Example: A 3KB document would take 3 WCUs
    - Calculation: (# of docs * avg size / 1KB)

    RCU: Read Capacity Unit
    - 2 eventually consistent reads of up to 4KB per second, or
    - 1 strongly consistent read of up to 4KB per second
    - Example: An 8KB document would take 2 RCUs for a strongly consistent read
    - Calculation: (# of docs * avg size / 4KB) * (eventually consistent ? 0.5 : 1)

    Scaling and Replication

    - Automatically replicates to 3 availability zones within a region
    - Replication is ensured on at least 2 AZs before a write is considered complete
    - Automatic scaling is an option

    Click Here to Read Part 2
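    The capacity arithmetic above can be captured in a couple of small Python helpers. This is a sketch: sizes are rounded up per item, mirroring how DynamoDB bills partial capacity units, and eventually consistent reads cost half a unit:

```python
import math

def wcus(item_size_kb):
    """Write capacity: 1 WCU per 1KB (rounded up) written per second."""
    return math.ceil(item_size_kb / 1.0)

def rcus(item_size_kb, strongly_consistent=True):
    """Read capacity: 1 RCU per 4KB (rounded up) read per second with
    strong consistency; eventually consistent reads cost half as much."""
    units = math.ceil(item_size_kb / 4.0)
    return units if strongly_consistent else units / 2

print(wcus(3))         # 3   (the 3KB document example above)
print(rcus(8))         # 2   (the 8KB strongly consistent example above)
print(rcus(8, False))  # 1.0 (eventually consistent costs half)
```

    Multiply either result by the expected documents per second to estimate the provisioned capacity a workload would need.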

  • Smarter Things | Technology choices for developing an IoT solution

    Planning a Smart Solution

    Introduction

    This is the story of a journey that turned my ancient A/C window unit into a smart device that can integrate with Alexa, a dashboard, and a native app. However, this story isn't only about a single outdated device entering the era of the cloud; it's about the power and flexibility provided by AWS and its suite of IoT-related services. In our first expedition into the "smart" world, let's take a step back and look at how to architect a smarter solution.

    Setting Goals

    At Cedrus, we have recently been working on an end-to-end solution to monitor and control IoT devices using (mostly) AWS. Before we began, we used design thinking to clarify our main goals:

    - Allow users to control devices through multiple user interfaces (Alexa, web app, native app)
    - Report energy consumption for devices when possible
    - Display historical energy usage in a meaningful way
    - Ensure that security is robust throughout our entire stack
    - As much as possible, create a technology-agnostic solution
    - Train machine learning algorithms to detect anomalous device behavior; for example, send a push notification when the door of a freezer connected to a smart plug has been left open

    With these goals in mind, research and testing led us to our ideal solution, built around Greengrass and openHAB. Let's dive into some of the reasons we chose these technologies.

    Why Greengrass?

    Greengrass, a relatively new service provided by AWS, is an incredibly powerful tool for several reasons. First and foremost, Greengrass takes the power of IoT Core and moves the business logic to the edge. Greengrass allows a device to run a long-lived Lambda function locally on a gateway device. For our purposes, this meant we could develop a Node.js application to control devices and manage and report state. Greengrass also grants the ability to perform OTA (over-the-air) updates, making our code scalable as an enterprise solution.
    Patches and new features can be rolled out without having to physically access the gateway device. Furthermore, integration with other AWS services is seamless: data can be captured and piped to IoT Analytics, SNS push notifications can be triggered directly from the gateway device, and CloudWatch logs make debugging a breeze. Additionally, the robust security provided by Cognito and IAM prevents bad actors from accessing any valuable information in the cloud, or from turning a smart home into a fleet of malicious bots.

    As an added note—and a bit of personal commentary—Greengrass is simply fun to use. If you've ever wanted to get a closer look at the life cycle of a Lambda function, or you're excited by the prospect of easily running a Node.js app on a Raspberry Pi, Greengrass is sure to fill your nerdy heart with joy.

    Why openHAB?

    First, it's important to understand the popular protocols used in new smart home products: Z-Wave and Zigbee. Both of these low-energy radio protocols were developed to compete with WiFi (which consumes too much power for many IoT devices) and Bluetooth (which has a very limited range). We chose to develop our solution around Z-Wave because, compared to Zigbee, it has more universal compatibility, better security, a longer range, and won't interfere with WiFi. In order to run Z-Wave devices, you need a hub to control the system. This is where openHAB enters the picture.

    openHAB is described as "vendor and technology agnostic open source automation software for your home." It was our clear choice for several reasons:

    - While we had chosen Z-Wave for development purposes, openHAB also supports a Zigbee binding, allowing for greater future extensibility.
    - openHAB has a very active community and is well maintained. In fact, when the smart thermostat we purchased wasn't integrating properly with the openHAB Z-Wave database, Chris Jackson, a maintainer, was incredibly prompt in responding and fixing the issue.
    - The fact that openHAB is popular and open source means that as future smart products are released, openHAB support will almost certainly follow quickly.
    - Finally, the most important factor in our decision was openHAB's exposed API endpoints, which report and control the state of all connected devices. This allowed us to easily integrate the Z-Wave protocol into the long-lived Lambda function running our Node.js app on the gateway device.

    Conclusion

    Hopefully the first part of this blog series has provided some insight into the main technology choices we made while developing our IoT solution at Cedrus. Greengrass and openHAB may comprise the backbone, but there are many more AWS services and technologies in the mix. In Part 2 of our blog series, we'll discuss in technical detail how a user action moves through the cloud to control the state of our device. After all, we've only just started on our journey to turn a dumb A/C unit into a "Smarter Thing"…

  • A cloud for those who can’t move to the cloud

    Why IBM Cloud Private (ICP) could be the perfect option

    In today's ever-connected business environment, the cloud has come to represent the cornerstone of differentiation and competitive advantage. For most companies, it is the environment where the majority of their business processes and applications reside. However, for many companies the public aspect of cloud computing is simply not a possibility: regulatory challenges and other constraints take the option off the table. That said, there are ways to experience all the benefits of the cloud without moving to a strictly public option—by leveraging a private or hybrid model. For instance, IBM's private cloud offering, known as ICP, delivers the benefits of public cloud while remaining compliant with a multitude of industry-specific regulations—enabling users to effectively manage enterprise workloads with the benefit of extra security controls.

    So, why is the cloud so important? Trust me when I say I'm quite aware that the question sounds odd in this day and age. Most understand why the move to the cloud has been so important up until now. But that's not where cloud ends; in fact, that's where it begins. The need for companies to embrace the next phase of cloud—known to most as digital transformation—will come to represent the next evolutionary stage in computing as we know it. The cloud will now be the epicenter for all modernized business processes, leveraging everything from Artificial Intelligence (AI) and machine learning to the Internet of Things (IoT), digital assistants and, of course, the new gold standard of business: ubiquitous data. In fact, data is the single greatest driving force of the new millennium—leveraged to achieve everything from better and more personalized end-user experiences all the way through to supreme business intelligence.
    It's for these reasons that organizations burdened with industry-related regulatory challenges must still embrace the cloud, albeit on their own terms. With ICP, IBM has created a Kubernetes-based container platform that can help organizations quickly adopt cloud-based infrastructure, enabling them to modernize and automate workloads. More importantly, it enables users of the platform to build new and highly innovative cloud-native applications to remain relevant and competitive in a world driven by cloud infrastructure. The only difference is that the development and deployment of new applications takes place in a highly secure, private infrastructure within one's own data center or hybrid model, mitigating the potential security concerns associated with public cloud options.

    For example, one of the most prevalent business challenges today is the need to modernize traditional applications. And, as our digital world continues to evolve, the demand on companies for better and more efficient scalability and resilience is paramount. Of course, like any modern IT challenge, that's easier said than done—but there is light at the end of the tunnel. ICP's catalog includes containerized IBM middleware that is ready to deploy into the cloud—a benefit for those who dread the perceived long and arduous path to cloud readiness. Containerization dispels those concerns, enabling users to avoid application-specific breakage points when modernizing monolithic and legacy applications. It can also reduce downtime by enabling users to address and isolate application interdependency issues individually, without having to schedule downtime for an entire system.

    So what does this truly mean for highly regulated business models? Simply put, cloud on one's own terms.
    It's clear that public cloud offerings enable far more options in planning one's own data center environment. The elasticity afforded by the big players, such as Amazon AWS and Microsoft Azure, is obviously of great value; in fact, it is the single biggest reason that many organizations—in the private and public sectors alike—are choosing to move more infrastructure to public options. It is here, though, that IBM's private cloud offering shines. Whether an organization chooses a fully private, a hybrid, or a fully public model for its cloud deployment, ICP makes managing High Availability (HA) workloads far easier in any case. And the ability to use a single deployment method and DevOps process at any time—should the organization decide to deploy to the public cloud and an on-premises data center simultaneously—makes it an easy choice.

    In the end, the cloud is here to stay, and a business of any nature requires modernity to exist. Knowing that fully public cloud options are not for everyone, it's nice to know there is still a cloud option available that can deliver the same benefits—but on one's own terms.

  • Cedrus Digital Transformation Solutions Announces New Partnership with Twistlock

    Partnership to deliver automated and scalable container cybersecurity to enterprise clients

    September 13, 2018 — New York City — Cedrus, a leading provider of enterprise-class Digital Transformation Solutions, announced today that it has joined cybersecurity company Twistlock as a member of the Twistlock Advantage Program. This new partnership will bring the benefits of the Twistlock cloud native cybersecurity platform to Cedrus's already comprehensive cloud security practice, used by many of North America's largest companies.

    With the addition of Twistlock to its already robust security portfolio, Cedrus will now be able to offer yet another unique solution: a comprehensive, automated, and scalable container cybersecurity platform, one that encompasses precise, full-lifecycle vulnerability and compliance management, as well as application-tailored runtime defense and cloud native firewalls. In short, the solution delivers the ability to secure containers and modern applications against the next generation of threats across the entire application lifecycle.

    "We are incredibly pleased to have forged this new relationship with Twistlock," commented Mike Chadwick, Senior Vice President of Sales and Business Development at Cedrus. "For so many companies, the traditional approach to cybersecurity has always been slow and cumbersome, with security teams having to resort to manual processes to mitigate risks against attacks. Twistlock has solved that challenge. With the first ever purpose-built solution for containers and cloud native security, companies can now benefit from a technology that pairs unique cloud native technology directly with DevOps-related practices to provide exceptional security. We have no doubt that this will be highly beneficial to all of our enterprise customers, and beyond."

    "Twistlock is trusted to protect the cloud native applications of hundreds of leading organizations worldwide.
As we continue to scale our operations, it is increasingly important to partner with companies that deliver solutions for the digital age,” said John Leon, VP Business Development and Alliances at Twistlock. “Cedrus is a leader in Digital Transformation Solutions, delivering the trifecta between digital process, cloud native, and cloud security. This new partnership will result in stronger security solutions for our joint customers.”

    The newly formed partnership enables Cedrus to deliver a multitude of security benefits to its customers, including:

    - Integration with any Continuous Integration (CI) tool and registry to provide unmatched vulnerability and compliance scanning and enforcement for container images, hosts, and serverless functions
    - The ability to automatically learn the behavior of images, microservices, and networks to whitelist known good behaviors while preventing anything anomalous
    - Protection of running applications with layer 3 and layer 7 firewalls reimagined for cloud native environments, and powerful runtime defense and access control, providing in-depth safeguarding against next-generation attacks

    In addition to the Twistlock partnership, Cedrus continues to be a highly sought-after Digital Transformation Solutions provider within North America and abroad. The company has earned a reputation for solving highly complex business challenges for many of the world’s largest healthcare organizations, financial and insurance companies, and more. By leveraging expertise in digital process automation, cloud native technologies, highly advanced cloud security expertise, and its design thinking methodology—paired with the latest in “smart” technology, including artificial intelligence and robotic process automation—Cedrus continues to help organizations of all types drastically increase productivity and reliability, all while greatly reducing operational costs.
For information regarding the Twistlock partnership, or to learn more about Cedrus Digital Transformation Solutions, contact:

    Media Relations Contact
    EyeVero Marketing Communications Group
    +1 613-260-3037 ext. 507

  • A Faster Future: Leveraging AWS for DevOps

    The future is fast. The life cycle of an application has gone from months to days. For DevOps teams, this dictates the necessity of several vital practices: migrating from monolithic architectures to microservices, developing CI/CD pipelines, and building infrastructure as code. AWS DevOps tools allow us to speed the process from development to delivery while maintaining these newly established best practices. Let’s take a look at how we can use AWS CodePipeline with CloudFormation. We’ll cover the steps for:

    - Connecting CodePipeline to GitHub as a source
    - Utilizing CodeBuild within a pipeline for testing/linting a Node.js application

    Connecting our pipeline to GitHub for Source Control

    CloudFormation allows us to embrace infrastructure as code. In our CloudFormation template, we can define a CI/CD strategy with a highly adaptive CodePipeline, which pieces together our source code, CodeBuild projects, Lambda functions, and a CodeDeploy application/deployment group. Here’s a look at how we use GitHub integration when defining our CodePipeline in a CloudFormation template:

        CodePipeline:
          Type: AWS::CodePipeline::Pipeline
          Properties:
            Name: Example-CodePipeline
            Stages:
              - Name: Source
                Actions:
                  - Name: SourceAction
                    ActionTypeId:
                      Category: Source
                      Owner: ThirdParty
                      Provider: GitHub
                      Version: 1
                    OutputArtifacts:
                      - Name: SourceCodeOutputArtifact
                    Configuration:
                      Owner: Cedrus
                      Repo: Demo
                      Branch: Prod
                      OAuthToken: !Ref GitHubToken
                    RunOrder: 1

    Now, when a branch is merged into the Prod branch of the GitHub repo named Demo, our pipeline will be triggered.

    CodeBuild: Containerized Testing

    CodeBuild is a very powerful tool for several reasons. It eliminates the need to provision, manage, and scale our own test/build servers. CodeBuild is fully managed, almost limitlessly scalable, and flexible for the needs of any given application. It also runs on demand, saving money and resources. Here’s a look at how we can define a testing environment in a CloudFormation template.
Our BuildSpec allows us to run commands during specific lifecycle hooks within a containerized environment that we define:

        TestCodeBuildProject:
          Type: AWS::CodeBuild::Project
          Properties:
            Name: Sample-Test-Env
            Description: CodeBuild project for running Node.js Mocha tests
            Artifacts:
              Name: Test-Artifacts
              Type: CODEPIPELINE
              Packaging: NONE
            Source:
              Type: CODEPIPELINE
              BuildSpec: |
                version: 0.1
                phases:
                  install:
                    commands:
                      - npm install
                      - npm install -g mocha
                  pre_build:
                    commands:
                      - npm run lint
                  build:
                    commands:
                      - npm test
            TimeoutInMinutes: 10
            Environment:
              Type: LINUX_CONTAINER
              ComputeType: BUILD_GENERAL1_MEDIUM
              Image: aws/codebuild/nodejs:6.3.1

    Along with unit tests, which will terminate the pipeline if they fail, we are also able to check for any linting errors. This means best practices must be observed throughout the entire team. With internally defined rules, we are able to ensure our code meets strict quality standards even as the pace of development quickens.

    Conclusion

    Effective CI/CD pipelines are a key element in the fast-paced world of DevOps. Here are a few things we learned while integrating CI/CD solutions with CloudFormation, CodePipeline, CodeBuild, and CodeDeploy:

    - GitHub integrates directly with CodePipeline, but BitBucket or other code sources require an extra step of configuration. For source code that exists outside of AWS or GitHub, we can configure a pipeline that uploads a compressed file of the code base to S3 (note that versioning must be enabled). CodePipeline can track this S3 object to trigger a pipeline when new code is uploaded.
    - Infrastructure as code makes it easy to move through environments from dev to test and production. CloudFormation is a powerful tool for creating predictable, tested, secure, repeatable environments.
    - CodePipeline allows for continuous delivery or continuous deployment depending on the needs of an application. The ability to add a step that requires human approval means that the whole process can be automated while deployment still requires administrative consent before changes move to production.
    - For CI/CD to work well, there must be extensive test coverage. CodeBuild is a powerful tool, but it’s only as effective as the tests it runs. Robust unit/integration testing can eliminate potential hiccups in deployment.

    A little bit of configuration up front can save a lot of time and energy in the long run. These methodologies have become best practices for a reason: using infrastructure as code, leveraging CI/CD pipelines, and embracing microservice and serverless solutions. These core concepts make development as fast as the future demands.
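The S3 source and manual-approval options mentioned above can be sketched as additional pipeline stages. This is a minimal sketch under stated assumptions: the bucket name, object key, and stage names are hypothetical, and the source bucket must have versioning enabled.

```yaml
# Hypothetical source stage tracking a versioned S3 object instead of GitHub.
- Name: Source
  Actions:
    - Name: S3SourceAction
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: S3
        Version: 1
      OutputArtifacts:
        - Name: SourceCodeOutputArtifact
      Configuration:
        S3Bucket: example-source-bucket   # versioning must be enabled
        S3ObjectKey: source.zip           # a new version of this key triggers the pipeline
      RunOrder: 1

# Hypothetical manual approval stage gating the move to production.
- Name: Approve
  Actions:
    - Name: ManualApproval
      ActionTypeId:
        Category: Approval
        Owner: AWS
        Provider: Manual
        Version: 1
      RunOrder: 1
```

These stages would slot into the Stages list of the pipeline resource shown earlier; the approval stage pauses execution until an authorized user approves or rejects the change.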

  • Why AI and IoT have become imperative for business success

    It’s no surprise that artificial intelligence (AI) and the Internet of Things (IoT) have become the focus of businesses around the globe. The fact that this emerging technology will be the single greatest driver of business success in the decades to come is more than proven at this point—it’s a reality. Moreover, the numbers that support the decision to embrace and implement digital transformation initiatives are downright staggering.

    For instance, let’s first consider the single greatest historical driver of technological change: the smartphone. In just one decade, this ubiquitous mobile device, and what it represents, has changed how the majority of people around the globe view technology and its impact on daily existence. After all, with the advent of the smartphone came the mobile app, the direct representation of a smart device’s highly personalized features and functionality. To put the smartphone’s impact on the global economy in context, it is projected that by just next year, the total number of mobile phone users in the world will surpass the five billion mark. Add to that the statistic that, in 2018 alone, consumers downloaded 205 billion mobile apps to their connected devices—a number that, by 2022, is expected to surpass 258 billion downloads.

    However, mobility is not where this ends—not by a long shot. Paired with the smartphone, and the seemingly endless number of apps available to download, comes the introduction of the IoT. From thermostats, cameras, and home-monitoring systems, to city infrastructure, wearable personal devices that track health, and even automobiles, apps and mobility are directly related to the IoT. All of these “things” represent a far greater end-user experience, one that is customized to each user. As a result, users now have the ability to create the experience they want—far from the generic brand experiences of the past, where everyone received the same experience regardless.
It’s this new approach that the IoT offers that companies such as Amazon are wholeheartedly embracing, a customer-first approach that will set the stage for a new way of doing business. Jeff Bezos, founder and CEO of Amazon, put it best: “We see our customers as invited guests to a party, and we are the hosts. It’s our job every day to make every important aspect of the customer experience a little bit better.” In this case, the IoT and supporting technology enable Amazon to use ongoing data to personalize real-time experiences, all based on browsing and buying history—a lesson to be learned by all companies looking to take a truly customer-centric approach.

    And, just like the staggering numbers that illustrate the growth of mobility, the rise of the IoT is right behind it. As recently reported by Gartner, the total number of IoT devices in use across the world will reach 20.4 billion by 2020, a staggering figure unto itself considering that it is more than double the number from 2017 (approximately 8 billion).

    Furthermore, the evolution of mobile and the IoT is now falling squarely into the new realm of AI. And with that new realm comes the introduction and mass adoption of digital assistants such as Amazon’s Alexa, Google Home, and Apple’s Siri, all of which are supported by AI. It’s this pairing of new world order technology that enables the very essence of outcomes: a five-part process defined by Sense > Transmit > Store > Analyze > Act. This series has come to define the IoT, AI, and digital transformation as a whole—the ecosystem that businesses must adopt in order to succeed in the months and years to come.

    Why such a short timeline as it relates to needed adoption? The reasoning is simple. When calculating the evolution of mobility—and subsequent app adoption—for work and lifestyle use, the digital assistant becomes the next logical step, creating a literal hands-off approach while leveraging highly personalized experiences through AI-driven magic.
And if you are one of the few skeptics still contemplating the mass adoption of digital assistants and AI, know that the estimated number of people using digital assistants worldwide is projected to grow to more than 1.8 billion by 2021, an increase of more than 80 percent from today—not a bad number given a three-year trajectory.

    So where does this leave business and strategic planning for the next decade? With the need to strategically plan digital transformation initiatives, creating the ability to adapt, through technology, to better manage and accelerate goals and desired outcomes and create a better end-user experience. It’s that paradigm that also enables businesses of all types to capture and utilize user data for better business, improving everything from customer experience to supply chain management, to creating whole new revenue streams. After all, the more detailed the single-user profile that increased data creates, the better and more engaging the experience—and the higher the potential revenue gains and cost savings.

    The reality is that end users—whether they be consumers or internal business users—are driving an unprecedented demand for all “things” to be connected in a way never before seen. Moreover, that connection of things must be so intrinsic that it functions as the perfect blurred line between all work and life ideals. No longer is there the touted work–life balance; now, it is just digital life. And with that, businesses must embrace an approach far beyond that of their own brand. They must now speak to a global ecosystem wherein all things are created equal and are equally accessible and, most importantly, consistent. If a brand cannot compete with every other brand in the world—regardless of vertical market—the world will ultimately move on without it. Simply put, the IoT, AI, and digital transformation form the new status quo for business and the clear path to business success. And without it? I shudder to think of the consequences.

  • AWS PrivateLink with API Gateway and Lambda Functions

    Security is an essential element of any application, especially when it comes to the RESTful API layer. Thousands of calls are made daily to share information via REST APIs, making security a top concern for all organizations in all stages: designing, testing, and deploying the APIs. We are living in an era where our private information is more vulnerable than ever before, so it’s very important to protect your APIs from threats and vulnerabilities that increase daily.

    In addition to all the guidelines available for building a secure API, an important step is to make your API private. Attackers will not be able to launch an attack on your API if they can’t find it. Exposing your APIs to the public adds a range of security and management challenges that you can avoid. Amazon has introduced AWS PrivateLink so you can choose to restrict all your API traffic to stay within your Amazon Virtual Private Cloud (VPC), which can be isolated from the public internet. Now you can create a private API in Amazon API Gateway that can only be accessed from within your VPC. It eliminates the exposure of data to the public internet by providing private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network.

    How can I make my API private?

    In this blog, we’ll assume that you already have an API created in API Gateway with an endpoint of type Edge Optimized or Regional (publicly available) and a VPC in place. It can be your default VPC.

    First Step – Create an Interface VPC Endpoint for API Gateway

    To create an endpoint from the console, follow these steps:

    - In the VPC console page, choose Endpoints, then Create Endpoint.
    - For Service Category, select AWS services.
    - For Service Name, choose the API Gateway service endpoint including the region, with Type as Interface. In this case, it will be {{region}}.execute-api.
Fill in the rest of the information (choose which VPC and subnets, enable Private DNS, and select Security Groups), then choose Create Endpoint.

    Second Step – Make your API private

    Open the API Gateway console page to see the list of your deployed APIs. In this example, my API is called customer. Its endpoint is of type Edge Optimized, so it’s publicly available. To change the type of the Endpoint Configuration, press the small gear on the upper-right corner and select Private in the endpoint type list.

    Final Step – Define a Resource Policy for your private API

    1. Choose your API.
    2. Select Resource Policy from the left navigation panel.
    3. Select “Source VPC Whitelist”.
    4. Substitute {{vpceID}} with the Endpoint ID that was created above in the first section.
    5. Press Save.

    You have now successfully made your API private and accessible only from your VPC.

    Testing your API

    The easiest way to test the availability of your API is by calling it from a serverless Lambda function. Follow these steps to create a Lambda function that calls your API from inside and outside your VPC:

    1. Navigate to the Lambda console and choose Create Function.
    2. Replace the template code in the code section with the following. Replace {{url corresponding to a get request from your api}} with the GET url from the deployed version of your API in API Gateway.
    3. In the Network section below, select your VPC, subnets, and the security groups required.
    4. Save the function, then press Test. You should receive a 200 response with the data returned from your API.
    5. Now, to call your API from outside your VPC, return to the Network section and select “No VPC” from the VPC list.
    6. Save the function, then press Test. In this case, you should get an error, “getaddrinfo ENOTFOUND”, which means that the Lambda could not find your API.

    Integrating PrivateLink with API CZAR

    API CZAR is one of the tools that you can use to create an enterprise-grade API that runs on AWS API Gateway and Lambda.
You can deploy any API created in API CZAR on AWS API Gateway by simply choosing “AWS Lambda/API Gateway App” in the packaging options when you’re packaging and deploying your application, and then following the instructions in the README file generated. Now, with the PrivateLink feature, you can choose the deployment to be private on API Gateway by providing CZAR the Endpoint ID. API CZAR will deploy the API on AWS, configure the endpoint to be PRIVATE, and configure the Resource Policy so the API can be accessed only from the needed VPC—all this in one command.

    The steps to deploy a private API on AWS API Gateway from API CZAR are as follows:

    1. Choose “Package and Deploy” from the options on the needed API, as in the following figure.
    2. Choose “AWS Lambda/API Gateway App” from the Packaging Options list, then fill out the required fields.
    3. Press Next, generate the API, and download the application as a zip file.
    4. Follow the instructions in the README file generated with the app to deploy the application on AWS API Gateway.

    After deploying your application to API Gateway, you can test your API using Lambda functions as in the previous section.

  • Defining the human element of Digital Transformation

    Why IT continues to struggle with a new world order

    All buzzwords aside, Digital Transformation has now become something that is very real—a new must-have approach to business driven by the demands of end users and consumers for a more connected experience, regardless of the vertical market. In fact, Digital Transformation has become such a driving force that a recent study has shown that a staggering two-thirds of CEOs within the Global 2000 have Digital Transformation as a key strategic component of their corporate strategies. However, with so many companies embracing this new change and model, many struggle with what it actually entails—and many times it’s the human element that’s the sticking point.

    Simply put, Digital Transformation is the path forward in addressing our increasingly complex world—a world that is continually inundated with new applications and platforms and, as a result, an ever-growing amount of data that must be managed. Pair this with the rate at which all of these things are being introduced, and the complexity and need for change become evident. The irony in all this: all of these challenges are created by the human element. The challenge is not so much around introducing new technologies; it’s more about addressing the people, the past practices of IT, and the challenges associated with legacy systems and processes. Luckily, those elements are also easily defined.

    First, there is simply the matter of bandwidth. It has been my experience that for almost every company I’ve worked with on a Digital Transformation project, the sheer amount of work that is waiting—an already long list of to-dos and projects—is downright staggering. And whether that’s the result of understaffing, limited bandwidth, or simply too many demands coming from all departments, asking an IT department to now address a perceived global shift in process management and technological infrastructure can be just too daunting to consider.

    Second, there is the IT budget.
For most companies, the majority of the IT budget is an exercise in maintaining the status quo. Keeping legacy systems running and addressing the day-to-day needs of all departments to ensure business continuity will, ultimately, supersede the desire for perceived change. Allocating new budget for new initiatives outside of regular business can be perceived either as an exercise in excessive spending, or as taking budget away from something that is needed.

    Third, there is the perception of IT versus the reality of IT. When embracing and enacting digital changes within a company, the expertise needed to implement such things usually falls far outside the internal skillsets of the IT department. And, as a staunch advocate of IT departments, I would argue those skillsets should fall outside of their expertise, as they are not within the scope of what IT does. Furthermore, IT departments should not be chastised for not having those skillsets. Gone are the days when IT is expected to be able to do everything technologically related. The world is far too complex for those types of expectations.

    Finally, there is the fear of the unknown. Digital Transformation is a highly complex animal—one that, if done right, can impact every aspect of a company and its legacy practices. Pair that with the profound need and demand for innovation and embracing an entirely new way of thinking, and the desire to tackle such a realm of uncertainty can be met with hesitation, to say the least. After all, the human element of adoption and consensus has a far greater implication than the technology that drives it. And, if given the choice, aren’t incremental changes in business process far safer in comparison?

    So, if the meaning of Digital Transformation is defined (that’s the easy part), it’s the approach to transformation itself that requires better definition and parameters. The good part in all of this is that this, too, can be easily addressed.
As mentioned above, bandwidth and skillsets are always in high demand. But those elements can be resolved with the right partners, technologies, etc. The greater issue is that of the budgets and uncertainties that accompany Digital Transformation. For me, my approach is twofold in helping companies with digital efforts. The first is defining something that is tangible. A digital journey is just that—a journey. There is no need to boil the proverbial ocean on the first day. By creating an approach centered on minimum viable projects (MVPs), small and incremental changes can be made without disruption to the company as a whole. It also means that adoption can be more easily managed as the impact of change is not so great that it causes people to lose sight of their respective comfort zones. As for the uncertainty around Digital Transformation—there’s no need for hesitation. Innovation doesn’t have to be stressful; in fact, it should be the opposite. Innovation should be exciting, engaging, and breathe new life into all departments involved. The best way to do that is to embrace a design-thinking methodology. We all know that big companies can be cumbersome by nature. Breaking that natural state by utilizing a start-up approach to tackling new ideas and implementations can go a long way. It also naturally tackles other aspects of the adoption challenge. When people are engaged, excited, and see small yet important projects come to fruition, the road and vision forward is further embraced. With the human element of Digital Transformation solved, now comes the next challenge—what technology to embrace. But that is for another day…another blog.

  • Why outsourcing Cloud Security is in the best interest of everyone

    Companies are continuing their ongoing migration to the cloud. If anything, the pace of this migration is accelerating. This is seen in two main areas. Firstly, companies are moving their custom applications off their own infrastructure and onto hosted platforms such as Amazon AWS and Microsoft Azure. Secondly, many business units are taking advantage of existing Software as a Service (SaaS) applications to solve problems quickly, rather than relying on IT to build them a custom solution. These changes improve customer agility, but add new challenges for IT security and, in particular, the security of data.

    The challenges in securing cloud applications are very different. There is no longer a perimeter to defend in depth, as applications are scattered and geographically diverse. There is not even a known set of applications to secure, as in many cases business units are making use of SaaS applications without IT being aware of it. Finally, the security model of the infrastructure providers depends on a model of shared responsibility, where the provider and the customer have different but interrelated responsibilities.

    With this new paradigm, IT no longer has direct access to the physical environment as it once did. This is particularly true of SaaS applications, where the vendor only provides minimal access to functionality in a shared environment, typically by a combination of user interface and API access. However, important corporate data, much of it regulated or restricted, now resides in these environments, within applications that can be very opaque to the end user, and where the rules around data storage and protection may be poorly defined. With these changes, we are moving from an environment where security meant locking down the perimeter to a model where data security is paramount.
This new model needs to be looked at from two different perspectives: that of the technology that is needed, and that of the people-related aspects, the policies and governance that surround the management of cloud-based applications and data.

    The technology itself—the tools that provide ways to control, monitor, and protect information assets based upon the rules decided under governance/policy—is in fact the easier of the two to come to grips with. There are new categories of tools that help an organization get a handle on Cloud Security management, such as cloud-based identity and access management (IAM) and cloud access security broker (CASB) solutions. These tools provide capability in several areas, including identity management in the cloud (IAM), authorization and authentication for cloud services (IAM), data loss prevention (CASB), malware prevention (CASB), anomaly detection (CASB), and cloud application usage risk mitigation (CASB).

    However, the more complex part of Cloud Security is establishing the policies and processes needed to manage these new tools. Many capabilities will need to be developed, along with ongoing operational management of the new tools and governance processes. These will include ongoing management of the processes, validation against regulatory standards, and day-to-day operational management. These are the items that need to be taken care of so that risks are mitigated, compliance and regulatory requirements are enforced, threats are managed in real time, alerts are monitored, and reporting is timely.

    And, of course, there are the usual issues related to building out new capabilities: the staffing scenarios and financial implications that coincide with the management of Cloud Security. For smaller organizations, attracting someone (or multiple people) to the organization can be a challenge, along with the difficulty of retaining employees once they have been trained in skills that are highly in demand.
For larger organizations, the financial aspects of calculating staffing costs versus returns need to be considered. Even when the right individuals with the right skill sets are found and retained, the financial aspects can soon come into question. It might be difficult to justify the costs of building a new team if there are more effective options available via outsourcing.

    Delegating these Cloud Security challenges to a managed services company can start to make a lot of sense. Experts who have the knowledge and pedigree to tackle modern Cloud Security challenges head-on, leveraging the collective experience gained through simultaneously monitoring a multitude of companies, bring with them a plethora of benefits with little to no downside. Aside from the monetary aspects of predictable operating costs (moving CapEx to OpEx), professional managed service providers can also immediately implement everything from best practices to automatic notification and implementation of regulatory change—because it’s their job.

    So, in the end, why is outsourcing your Cloud Security in the best interest of everyone involved? The answer is clear.

    Kyle Watson
