- Cleaner Microservice Orchestration With Zeebe+Lambda
Brian McCann, Software Engineer

In this post I'll talk about Zeebe, a source-available workflow engine that helps you define, orchestrate, and monitor business processes across microservices. I'll show you how to quickly get a full application that leverages Zeebe and Lambda running on AWS. I'm piggybacking off a post by the company's co-founder in which he showed how you can use Zeebe with Lambda. You may want to start with that one for a general overview of Zeebe and how it fits in with serverless; then, if you're interested in quickly getting all the components running on AWS, this post will show you how.

If you embrace AWS Lambda, or are considering adopting it, you may see Lambda as an alternative to Kubernetes and prefer the simplicity, reliability, and out-of-the-box scaling that the serverless way offers. I was motivated to write this for that type of developer. For Zeebe to work we need something running all the time, so we can't be fully "serverless". My concession is to use ECS to bridge the gap. ECS is a simpler and cheaper alternative to Kubernetes when you're getting started with containers. If you are a serverless-first developer, you or your organization might not want to take on the overhead of running and learning k8s if almost all your workloads run fine on Lambda anyway. Zeebe is a super powerful tool that allows you to compose your Lambda functions into arbitrarily complex stateful applications. Today I'll show how to get up and running quickly with Zeebe and Lambda using ECS.

Use case

In case you don't know about Zeebe, I'll try to describe what it does through a use case. A little while ago I was working on a backend that would allow customers to put a down payment on a vehicle and then go pick it up at a dealership. The whole process wasn't too complicated, but it did involve 5-6 discrete steps with external API calls.
Each of those steps had a number of potential failure paths, and based on the outcome of each step you would want to take a different action. On top of that, there are different types of errors: errors due to conditions in the physical world, like a customer entering their address incorrectly, and technical errors, like network outages or a bug in the code. You can imagine how the number of scenarios grows exponentially each time you add a step. General-purpose programming languages on their own are great at a lot of things; orchestrating a complicated sequence of steps is not one of them. Event-driven architecture can help us manage this emerging complexity by separating business logic into individual microservices whose isolated behavior is simpler to reason about. What we lose is end-to-end visibility of the business process, so debugging becomes a challenge and refactors are scary.

This is where a tool like Zeebe, or its predecessor Camunda, can help. These tools let us represent our processes as diagrams that are executable. One of the main advantages is that our business logic is now organized into manageable and isolated components. In addition, and in my opinion most importantly, this strategy separates the concern of orchestration from everything else. This allows us to write functional-style microservice components that are easy to read and refactor. All of our orchestration is managed by a dependable and tested framework, and the orchestration logic is cleanly separated rather than peppered throughout our microservices (as tends to happen).

In a BPMN diagram, the boxes with gears are "service tasks", which means they represent pieces of code external to Zeebe. The process engine (Zeebe) is responsible for executing the instructions and will invoke the code we've written where specified. In today's example the service tasks will be implemented as Lambda functions.
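To make that concrete, a service task handler on Lambda is just a function that takes the process instance's variables and returns new ones for the engine to merge back. Here is a minimal simulated sketch, in Python for brevity (the handler name, variable names, and the stubbed invocation are all hypothetical, not taken from the demo repo):

```python
# Hypothetical handler for a "reserve vehicle" service task. The
# engine-side worker passes the process variables in as the event,
# and the returned dict is merged back into the process instance.
def reserve_vehicle_handler(event, context):
    if event.get("downPayment", 0) <= 0:
        # An outcome the BPMN diagram can branch on via a gateway
        return {"reservationStatus": "REJECTED", "reason": "missing down payment"}
    return {"reservationStatus": "CONFIRMED", "dealershipId": event.get("dealershipId")}

# Simulated invocation, standing in for the process engine calling the task
variables = {"downPayment": 500, "dealershipId": "d-42"}
variables.update(reserve_vehicle_handler(variables, None))
```

The point is that the handler knows nothing about what came before it or what runs next; all of that sequencing lives in the diagram.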
There are a lot of other powerful features exposed by BPMN (Business Process Model and Notation); the ones pictured above are just a few.

Because Lambda functions are by design dormant until triggered, we need a way for Zeebe to trigger them. We will use the code provided by the zeebe-lambda-worker to create a link between the orchestrator (Zeebe) and the business logic (Lambda). The worker will listen for events on the Zeebe broker, pick up work tasks as they come in, and forward the payload to Lambda. The original post covers how all this works. Here is what I'm adding:

- How to spin up an ECS cluster and run the zeebe-lambda-worker on it
- How to use an IAM role for authorization instead of AWS credentials

Here are the steps involved in this walkthrough:

1. Get set up with Camunda Cloud: sign up for a free Camunda Cloud trial account and launch a Zeebe instance
2. Use the Serverless Framework to deploy several Lambda functions and an IAM role
3. Get set up on ECS: create an ECS cluster using a setup shell script, then deploy the zeebe-lambda-worker to it
4. Test that everything works end to end

Sign up for Camunda Cloud

Camunda and Zeebe come from the same company. Zeebe is their newer engine, and Camunda Cloud is their managed cloud offering, which happens to run on Zeebe. It's confusing, I know; it probably has something to do with marketing, as Camunda is an established and fairly well-known brand. Getting set up with a development Zeebe instance is fairly straightforward. This post from the Zeebe blog walks you through the steps. The whole thing is a good read, but you can skip everything after the "create a client" part if you just want to get up and running. We'll need the broker address and client information later. Once we're set up with a free Zeebe instance, we can deploy the Serverless Framework project.

Deploy the serverless project

Prerequisite: you need an AWS account and an IAM user set up with admin privileges.
This is a great guide if you need help.

- Clone the repo: https://github.com/bmccann36/trip-booking-saga-serverless
- cd into the directory trip-booking-saga-serverless/functions/aws
- In the serverless.yml file, update the region property if you'd like to deploy to a region other than us-east-1

I have made barely any changes to the original forked repository. I removed the HTTP event triggers from the functions since we won't need them here; this makes the project faster to deploy and tear down, since there are no API Gateway resources. I also defined an IAM role that will give our zeebe-lambda-worker (which we have yet to deploy) permission to invoke the Lambda functions. The policy portion of this role is what we'll grant to the ECS lambda-worker.

Run the command sls deploy -v to deploy the Lambda functions (the -v flag gives you verbose output).

Connecting Lambda + Zeebe with ECS

The last step to get our demo application working end to end is to deploy the zeebe-lambda-worker. The source code can be found here. I forked this from Zeebe as well and made one small change: adding support for IAM role authorization. Since our ECS task will assume a role that grants permission to invoke the Lambda functions it needs, we do not need to supply AWS credentials to the worker code. I added an if/else statement so that a role is used when no AWS credentials are supplied. This works because the AWS SDK sources credentials hierarchically. The role referred to in this snippet is the one we created earlier when we deployed the Serverless Framework project.

If you prefer to use an accessKey/secretKey pair, make sure those keys are not the same keys you use as an admin user to work with AWS. You should create a role or user with the least permission possible; in this case we only need permission to invoke a few specific Lambda functions. I have already packaged the zeebe-worker as a Docker image and pushed it to a public repo.
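For reference, the IAM role and invoke policy described above can be sketched in the resources section of serverless.yml roughly like this (a sketch only; the role name, policy name, and function ARN pattern are placeholders, not the repo's exact values):

```yaml
# serverless.yml (sketch) -- a role the ECS task can assume, allowed
# only to invoke the workflow's Lambda functions
resources:
  Resources:
    ZeebeLambdaWorkerRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: zeebe-lambda-worker-role
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service: ecs-tasks.amazonaws.com   # lets ECS tasks assume the role
              Action: sts:AssumeRole
        Policies:
          - PolicyName: invoke-trip-booking-functions
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action: lambda:InvokeFunction
                  Resource: arn:aws:lambda:us-east-1:*:function:trip-booking-*
```

Scoping the Resource to a function name pattern, rather than `*`, is what keeps this least-privilege.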
The ECS setup, which we will get to in a moment, will pull this image. If you want, you can modify the worker source code and/or build the image yourself and publish it to your own repository. You'll need to package it as a .jar first, as it is Java code.

Prerequisite: ecs-cli installed on your machine. You can get it with the AWS binary or Homebrew. If you run into issues at any point, look at the AWS walkthrough for reference; I mostly followed the steps outlined there, with a few minor modifications. I have chosen a launch type of EC2.

Fill in some configuration

To make it easy to configure your ECS cluster, I've included some configuration template files, all of which have the word "SAMPLE" in the file name. To use them, cd into zeebe-event-adapter/ in the trip-booking-saga repo. Then copy each file with "SAMPLE" in the name and save it with "SAMPLE" removed from the file name. This will cause them to be ignored by git; I've configured the .gitignore to ignore these files so that you or I don't accidentally commit sensitive credentials to source control. Next, populate the values in each file as outlined below.

(file) aws.env
AWS_REGION=

(file) Camunda.env
ZEEBE_CLIENT_CLOUD_CLUSTERID=
ZEEBE_CLIENT_CLOUD_CLIENTID=
ZEEBE_CLIENT_CLOUD_CLIENTSECRET=

(file) ecs-params.yml
...
# modify this line
task_role_arn: arn:aws:iam:::role/zeebe-lambda-worker-role
...

The values we supply in Camunda.env will be used by the zeebe-lambda-worker to connect to the Camunda Cloud Zeebe instance we created in the first step. Once we've filled in the values in each of these files, we're ready to deploy. I've combined all the steps into one bash script, so everything can be deployed with one command. Just run the setupEcs.sh script with your region as the only argument, e.g. bash setupEcs.sh us-east-1. If you run into problems at any step, try running the commands in the script one by one manually, or refer back to the AWS tutorial.
Note: you may see this warning:

INFO (service aws) was unable to place a task because no container instance met all of its requirements.

Don't get impatient and kill the process; it just means your infrastructure isn't quite ready yet, and it should resolve on its own. If you are used to working with Kubernetes, the ECS terms can be confusing. An ECS service is not the same as a K8s service at all and has nothing to do with networking. In ECS, a service just declares that you want to keep a "task" (basically a container with some config attached) running. It's the same idea as a service you'd set up on a Linux server.

Test that it works

Disclaimer: the main purpose of this post is to show how to easily deploy the zeebe-lambda-worker to work with Lambda functions, so I won't cover Zeebe usage much. If you want to learn more about Zeebe, check out Bernd Rücker's posts or try the Zeebe quickstart.

Probably the simplest way to deploy and start the workflow is with the Zeebe Modeler tool, which you can download here. You can also use the command-line tool zbctl. Once you have the Modeler downloaded, open the trip-booking.bpmn file (in the zeebe/aws directory) and use the upload and play buttons to upload the workflow to the cluster and start an instance of the process. If the process instance completes successfully end to end, you will see the completed instance in the Operate console.

If we navigate to ECS, select the service we've deployed, and click on logs, we can see the successful results from the Lambda functions being passed back and forth through the zeebe-lambda-worker. Also, if we navigate to the Lambda console, select one of the Lambdas involved in our workflow, then select monitoring → view logs in CloudWatch, we can see the logs of the Lambda function itself.
Cleaning up

To remove the ECS resources when you no longer want them:

take down the service: ecs-cli compose service rm --cluster-config zb-cfg
remove the whole cluster: ecs-cli down --force --cluster-config zb-cfg

Closing thoughts

I am curious to hear what others think about pairing Zeebe with Lambda. Does the combination make sense? What would you do differently? For anyone who uses or has used Step Functions, I am also curious how this solution compares.

I had not used ECS much before this, and I was pleasantly surprised by how easy it was to provision and use. It seems they've added a lot of features since I last looked at it, to keep pace with Kubernetes. I also like how they use a docker-compose file and Docker-like commands, so you can work with your containers in the cloud more or less the same way you would locally. My one big concern and hesitation about getting good at ECS is that it's clearly not the industry standard. Practically, as engineers, it often makes more sense to use what everyone else is using, because that's what everyone already knows.

I think Zeebe is a very promising new product. I also think they have their work cut out for them, because there are a lot of competitors in this space. Some popular tools that fill the same niche are Netflix's Conductor and AWS's own Step Functions. They face competition not just from competing frameworks but also from homespun orchestration solutions built on event buses and other lower-level orchestration/choreography tools. I think a lot of orgs and developers don't even realize that there is a tool that can help them glue together, or orchestrate, microservices. In many cases people write this code themselves, and often it is a conscious decision: they feel that adding another library just creates additional complexity, makes their app harder for new developers to understand, or perhaps even limits what they can do.
I totally get that concern, as I've had these problems myself. For my personal development and collaboration style, I think Zeebe makes sense for a broad range of tech problems. I hear about a lot of people reaching for Kafka to achieve event-driven architecture. I have nothing against Kafka, and I think it is a technically impressive piece of engineering. An experienced dev can probably get up and running with it in a day, but they won't be using it optimally, or even getting real benefits from it, for a long time, because it is complicated and requires a lot of use-case-specific configuration. Kafka is more low-level than Zeebe and therefore applies to a broader range of use cases. However, if you are just using Kafka as an event bus to drive a collection of choreographed microservices, Zeebe may offer a simpler and actually more powerful solution. With Kafka, things like event replay, retry strategies, and monitoring at the business-process level are capabilities we need to build ourselves. Zeebe is appealing to me because they've found a way to generalize these needs, which allows us as developers to simply use their API to endow our apps with these advanced features and ultimately solve a business problem.
- Using Active Directory with Netskope (Part 2)
In Part 1, we discussed why you would want to integrate AD with Netskope and the AD integration tools Netskope offers, and briefly touched on Netskope's REST API. In Part 2 we are going to dive deeper into using the REST API to add additional attributes, and provide a sample PowerShell script that adds automation capabilities. Keep in mind that this document is not intended to replace the existing Netskope documentation, and will not cover the implementation details for the Netskope tools, as that is already well defined.

Netskope REST API and Directory Importer

As mentioned in Part 1, the Netskope REST API not only offers a variety of options to query data from Netskope, but also provides the ability to upload custom user attributes from a CSV or TXT file, as long as it is in a specific format (Netskope documentation on the file format and its limitations can be found at Administrators > Administration > REST API > Add Custom User Attributes). While this sounds eerily similar to what the Directory Importer is capable of (with a little additional configuration), using the REST API provides a few distinct advantages.

Let's take a real-world example. Your web proxy uses an employee's email address as a user key in order to correlate traffic 'events' to a user, and you have configured the On-Premises Log Parser (OPLP) to upload those events to Netskope. To add some additional user information to Netskope, you configure the Directory Importer to correlate on the mail (email) field when uploading all of your additional user attributes. But what happens when you have data from a different source, such as forward or reverse proxy data, and the application is not using the email field as the user identity? (Perhaps it uses the employee ID from your HR system.)
In this scenario, the events generated from that source are not going to contain any of the additional user information that was pulled from AD via the Directory Importer, because the user keys do not correlate with each other. That's where the REST API comes in. As we mentioned in Part 1, you can have duplicate rows in the CSV file, one for each possible key that can be matched against, which allows the uploaded attribute data to correlate with all events, even when they use different keys. Furthermore, with the REST API based solution, you can pull user information from sources besides AD (e.g. an HR system) and include it in the file to be uploaded.

Additional User Attribute Script

Netskope provides a bash script that can be used to upload a CSV or TXT file to your tenant. The script works as designed, but when trying to implement it as part of a solution, a few issues need to be considered. It is likely that you will have already provisioned a Windows instance for the Directory Importer. But since the Netskope-provided script is a bash shell script, you would need to provision a Linux instance in order to run it: extra effort and extra cost. You would probably need to work with your Active Directory team to obtain an export from AD, format it, FTP it over to the Linux instance, and then run the shell script to upload it to Netskope. Parts of this can be automated, but as a solution it is cumbersome and potentially fragile.

In an effort to simplify the solution, we have created a script (attached below) that performs the same functionality as Netskope's bash script, but in Windows PowerShell. This means you can run it on the same Windows instance as the Directory Importer, where it will extract the data from AD, build a correctly formatted CSV file, and upload it to your Netskope tenant. Let's go through the various parts.
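To make the multi-key idea above concrete, an upload file that lets the same user correlate by either email or employee ID might look like this (the column names and values are purely illustrative; consult the tenant documentation for the exact required format):

```csv
user,department,manager,title
jsmith@mydomain.com,Finance,Jane Doe,Analyst
E10442,Finance,Jane Doe,Analyst
```

Both rows carry identical attributes; only the key in the first column differs, so events keyed by either identity pick up the same enrichment.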
exportADAttributesToCSV

This function is responsible for querying AD and creating the corresponding CSV file, in the appropriate format, in preparation for its upload to the Netskope tenant. The lines you must configure are as follows:

Properties to pull from AD: change the values in the parentheses to your choosing. You should limit this to 15 attributes, as that is Netskope's current limit for custom attributes.

Names of the custom attributes to be uploaded: the first $sw.WriteLine call is where the header row is written into the CSV file. These will be the names displayed within SkopeIT. Keep in mind that the first column will be used as the key to correlate on (e.g. mail).

ManagerDisplayName block: this block populates the manager display name with an empty string if it is empty or null; otherwise it will either query AD for each user to get their displayName (heavy performance impact) or attempt to parse the manager's name out of the canonical name. This can be removed if managerName is not one of the attributes pulled.

Adding records to the CSV: this section is where the actual records are added to the CSV file under the header row. It will need to be altered to match the header attributes in order and count if you change them. If you have applications that use different keys, this is the place to add additional $sw.WriteLine statements to write more than one record per user.

jsonValue

This function is responsible for parsing the responses from Netskope, trimming additional whitespace, and returning the value at a given spot in the JSON body.

getToken

getToken is responsible for using the NSUI and NSRestToken variables and calling the Netskope tenant to get a token for subsequent REST requests. This token will be used when uploading the custom attributes to the tenant.
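As an aside, the work jsonValue does by hand (pick one field out of a JSON response body and trim stray whitespace) is a one-liner in languages with a JSON parser in the standard library. Sketched here in Python, with a made-up response body, purely to show the intent; the 'token' field name is illustrative, not Netskope's documented schema:

```python
import json

# What the script's jsonValue helper does: extract one field from a
# REST response body and strip surrounding whitespace.
def json_value(body, key):
    return str(json.loads(body)[key]).strip()

# Hypothetical response body for illustration
sample_body = '{"token": "  abc123  ", "status": "success"}'
token = json_value(sample_body, "token")
```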
uploadFile

This appropriately named function is responsible for chunking the CSV file (breaking it into smaller pieces) and uploading it to the Netskope tenant. It is primarily a port of Netskope's script to PowerShell.

getManagerName

getManagerName is responsible for attempting to parse the manager's full name out of the manager's canonical name, and will require some alteration depending on the way the canonical name is formatted within AD. Currently, it is set for 'CN=LastName\, FirstName MiddleName,OU=global,DC=mydomain,DC=com'. This function is an effort to reduce the number of requests made to AD and improve the script's performance, since it does not have to wait for multiple responses from AD. If managerName is not an attribute you intend to upload, or you elect to go the route of querying AD for the manager's displayName, you can do that within the exportADAttributesToCSV function.

Main Block

The main block is responsible for allowing insecure HTTPS connections (the Netskope script also does this, but PowerShell has an issue with it), some timing logic for the logs, and calling the various functions appropriately.

Configuring Windows for the Additional Attribute PS Script

The script is a modified version of Netskope's additional attribute shell script, designed to run in Microsoft PowerShell. It requires PowerShell v4.0 or higher, enabling the AD module for PowerShell in Roles and Features, and installing curl, as PowerShell does not have native functionality to make multipart/form-data REST requests.
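The parsing getManagerName performs can be illustrated in a few lines. This sketch is in Python rather than PowerShell, uses the DN format quoted above, and assumes the desired display form is 'FirstName MiddleName LastName'; adjust for your directory's formatting:

```python
import re

# Sketch: extract a display name from a distinguished name such as
# 'CN=LastName\, FirstName MiddleName,OU=global,DC=mydomain,DC=com'.
def manager_display_name(dn):
    # The comma inside the CN is escaped with a backslash, so split
    # only on unescaped commas to isolate the CN component.
    components = re.split(r'(?<!\\),', dn)
    cn = components[0]
    if not cn.startswith("CN="):
        return ""
    # 'LastName\, FirstName MiddleName' -> 'FirstName MiddleName LastName'
    last, _, rest = cn[3:].replace("\\,", ",").partition(",")
    return f"{rest.strip()} {last.strip()}".strip()
```

The real script does the equivalent with PowerShell string operations; the point is that no extra AD round-trip is needed per user.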
Installing the AD Module for PowerShell

To install the AD Module for PowerShell, log on as an admin and follow these steps:

1. Open Server Manager, select 'Manage' in the upper right corner, and then 'Add Roles and Features' from the drop-down
2. Select 'Role-based or feature-based installation' and hit next
3. Select the local server and hit next
4. Jump to 'Features' in the left pane, scroll down to 'Remote Server Administration Tools', and expand it
5. Expand 'Role Administration Tools' and 'AD DS and AD LDS Tools'
6. Select 'Active Directory module for Windows PowerShell' and hit next
7. Confirm the installation and complete

Installing curl

To install curl, log on as an admin and follow these steps:

1. Download the latest curl installer for your Windows environment from https://curl.haxx.se/download.html
2. Install curl and accept all defaults
3. Locate where the curl executable is installed (likely C:\Program Files\cURL\bin if all defaults were selected) and save this for the script variable configuration

Creating the Encrypted Password File

To keep the AD user password secure, the following process creates a secure string and saves it in an encrypted file. Since this process uses the Windows Data Protection API (DPAPI), you must do the following as the user that will run the script. Failure to do so will result in the script being unable to decrypt the password.

1. Log into the server as the user that will be running the script
2. Open a PowerShell window and type (Get-Credential).Password | ConvertFrom-SecureString | Out-File <path>, where <path> is where you would like the encrypted password file to be stored, e.g. (Get-Credential).Password | ConvertFrom-SecureString | Out-File "C:\PSScripts\ExportADUsers\SecurePW.txt"
3. Save the path for script configuration

Configuring Script Variables

The following variables should be configured in the script to match your environment:

DEBUG – toggle to increase logging when script issues are encountered.
NSUI – domain name of the Netskope tenant where attributes are to be uploaded

NSRestToken – token used to make REST calls to the Netskope tenant. If this value is changed on the tenant, it must be updated in the script.

maxChunkSize – if the AD export file is larger than this value, the file will be divided into chunks of this size (plus one smaller or equally sized chunk), which are then uploaded to the Netskope tenant in multiple parts. The recommended size is 5MB.

path – path where scripts and files exist. Must end in /*.*

csvFile – path and name of the CSV file (e.g. "$path\ALLADUsers_$logDate.csv")

logFile – path and name of the log file (e.g. "$path\UploadCSVtoNS.log")

ADServer – AD server name (e.g. "mydomain.com")

searchBase – lowest level in AD where queries are performed (e.g. "OU=Global,DC=mydomain,DC=com")

user – user used to query AD. Must have read access to AD and be the same user that created the encrypted password file

pwFile – location of the AD user's encrypted password file from the previous section

Scheduling the Script via Task Scheduler

To keep up with an ever-changing global directory, this script should be automated so the user data on the Netskope tenant stays up to date. We recommend running it daily using Windows Task Scheduler. If this needs to be changed:

1. Log into the server as an admin
2. Open Task Scheduler under Administrative Tools
3. Browse under Task Scheduler Library – Netskope to find 'Run AD Upload Script'
4. Right-click and select Properties
5. Under the tabs, set the following options:

General – when running this task, use the account that has access to AD and that you used to create the password file. 'Run whether user is logged on or not' – enabled. 'Run with highest privileges' – enabled.

Triggers – one trigger should exist to start the script at a given time, and it should be scheduled to terminate if it runs over an allotted amount of time. 'Begin the task' – on a schedule. Set the timing to your preferred period (e.g. daily @ 0800 GMT). 'Stop task if it runs longer than' – should be set to less than the frequency the script is set to run. 'Enabled' – enabled.

Actions – should be set to start a program. 'Program/script' – path to the PowerShell executable (e.g. C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe). 'Add arguments' – the script to execute once PowerShell is opened, using the -File argument and the absolute path to the script (e.g. -File "C:\PSScripts\ExportADUsers\NS_UserAttr_Upload_v1_0.ps1")

Conditions – none

Settings – 'Allow task to be run on demand' – enabled. 'Stop the task if it runs longer than' – enabled. 'If the running task does not end when requested, force it to stop' – enabled. 'If the task is already running, then the following rule applies' – do not start a new instance.

Maintenance

Periodic manual cleanup of log files and CSV files will need to be performed in the script directory so the server does not run out of storage space. Our recommendation is to clean up files at least once a month. Alternatively, you could implement this capability in your version of the script, or have a separately scheduled PowerShell script perform the cleanup.

To access the script, please click here.

Paul Ilechko | Senior Security Architect
Andrew Hejnas | Cloud Security Specialist & Solutions Architect
- API Czar, CodeSoju at the official Angular and Node Meetups in New York
A month ago, I had the pleasure of presenting API Czar & CodeSoju at the official AngularJS and Node.js Meetups, founded in 2010 and 2012 in New York City. API Czar is a rapid API development tool that enables teams to generate and deploy best-practice-based APIs, while CodeSoju is an open source initiative that provides a set of standards, best practices, and tools to help developers during all phases of the development lifecycle. In these two presentations, I talk about how API Czar and CodeSoju work, as well as the importance of the growing community of developers that supports them. Both tools sparked great discussion in the two communities, mainly because they solve everyday challenges those developers face. API Czar and CodeSoju derived from a key question we ask ourselves after completing any project: how can we maximize our team's efficiency on large-scale projects? For more information, please visit apiczar.io and codesoju.io.
- Cedrus Digital | Customized Digital Transformation
DIGITAL TRANSFORMATION SOLUTIONS DONE RIGHT

Your company is unique; your technology solutions should be too. Our expert consultants advise and tailor our solutions to your specific business needs with unprecedented speed and agility. Why trust your innovation to anyone else?

WHAT WE DO FEATURED PARTNERSHIPS

OUR VISION To deliver innovative solutions to our customers with unparalleled levels of functionality and sophistication, bridging the present into the future, while bringing unprecedented operational efficiencies into their business.

WHO WE ARE
82% Repeat Customers
164% Customer base growth over the last 4 years
44% CAGR for the past 3 years

About Cedrus We are a young, imaginative, diverse company, deeply rooted in successfully solving large business problems for the past 25 years. Our employees, partners, and customers share our zeal for excellence. Learn more about what makes us special. MEET CEDRUS

Our Expertise We tame cutting-edge technologies to fit your business. Our extensive certifications, internal investments in assets and IP, and our experience from hundreds of transformational projects inform our wisdom as we take on new challenges. OUR EXPERTISE

HOW WE'RE DIFFERENT We expertly guide you through discovery and design, architecture and development, and production and support, with an approach tailored to your unique business needs. Realizing your business' potential has never been easier! Learn about our offerings at no commitment, risk, or cost. WHAT WE DO

See us in action! Get to know our work through some real-life examples. Take a look at some use cases, demos, and solutions that others are already implementing. USE CASES

Grow with us! WE ARE HIRING! Want to be part of a dynamic, creative team? Want to join a company that will invest in you? Learn more about joining our team.
JOIN US

OUR PARTNERSHIPS Our close partnerships with some of the industry's top performers give you the best experience in using their groundbreaking, industry-defining tools and platforms. By keeping an open mind, a scientific curiosity, and an objective outlook, we accommodate and adapt our recommendations to the context of your business. LEARN MORE

WHAT PEOPLE SAY

Media and Entertainment Software Provider: "We were impressed with Cedrus' ability and assets to support us during our Cloud migration. Cedrus is not a typical systems integrator: they understand the underlying business challenge and help resolve it through cutting-edge technology practices. Their unparalleled expertise in implementing and deploying APIs/microservices to the Cloud enabled us to quickly and smoothly transition our software offerings to AWS."

READY TO GET STARTED? Get in touch with us! We're thrilled to answer your questions and help you define a vision. Contact Us
- AWS | Cedrus Digital
AMAZON WEB SERVICES Leveraging our years of AWS experience to guarantee secure, scalable, cost-optimized AWS Cloud adoption. Expand your business value through Cloud adoption, increase your development agility, implement operational excellence, and leverage cutting-edge technologies such as IoT and AI.

OUR AWS EXPERTISE Cedrus is a trusted AWS Advanced Consulting Partner. Our long-term partnership is supported by our significant number of AWS certifications and accreditations. We've been recommended by AWS to deliver secure IoT, digital innovation, and modernization projects. Cedrus was featured in AWS' "This is my Architecture" customer testimonial video series (see below).

OUR SUCCESSES WITH AWS

Cedrus' Work on Home IoT Cedrus developed a smart home solution for a major American energy company using AWS services. A gateway device, connected via a serverless solution to a progressive web app, tracks energy usage and controls smart home devices. The customer can also use voice control through Alexa. AWS Greengrass allows the gateway to run a Node.js application and offers the ability to perform over-the-air updates on the device. Technologies used included AWS IoT, AWS Greengrass, AWS IoT Analytics, API Gateway, AWS Lambda, and Cognito. The team also utilized APIs exposed by openHAB, a vendor- and technology-agnostic open source home automation platform with an active community. This work was featured at an AWS workshop, where AWS showcased to interested clients how partners use their technology.

Cedrus' Work on Industrial IoT Cedrus built a smart home integration for the customers of a leading American energy company. Customers signed up through their energy provider, were provided an Alexa to control smart home devices, and volunteered their usage statistics and control over their smart home devices. Cedrus built a dashboard for the company to see and control the devices and usage.
This solution, now managed by the company, is mutually beneficial: customers save money and energy, and the company does not have to build additional power plants to support unnecessary power usage.

Cedrus Featured in the AWS Blog
Cedrus reduced FADEL's API generation time from days to less than 30 minutes using our API Czar tool, which features Amazon API Gateway and AWS Lambda. CLICK HERE TO READ THE ARTICLE

Our work on AWS PrivateLink with API Gateway and Lambda Functions. CLICK HERE TO READ THE ARTICLE

OUR AWS OFFERINGS

Amazon MQ Migration
Migrate your on-premises messaging infrastructure to Amazon MQ. We are an Amazon MQ Partner. Learn more here.

Cloud Native Security Framework
An offering that assists customers in composing scrum teams that follow agile, pair programming, and test-driven development best practices to deliver cloud-native products

Art of the Possible
A workshop that brings people together from different departments to formulate new solutions by identifying key business challenges and recommending technology-based solutions that address them

Well-Architected Framework
Move your existing workload to AWS, leveraging cloud best practices

Cloud Migration
Enables AWS customers to modernize and move legacy monolithic applications into the cloud, leveraging microservice principles

Internet of Things
A suite of consultative services that help customers understand how they can leverage AWS IoT services and build a platform to improve their business in new ways

Machine Learning
Assists AWS customers in developing AI-enabled solutions such as chatbots and/or smart unstructured data intake using AWS Lex, SageMaker, and/or Alexa skills

DevSecOps Transformation
Synergize development, IT operations, and security teams

Blockchain
Assists AWS customers in leveraging Hyperledger and Amazon Quantum Ledger Database (QLDB) to implement immutable data stores

OUR AWS SPECIALTIES

Here are some of the ways we can help you accomplish your goals with AWS technology: Cloud Native Development, Monolithic to Microservices, Apache Kafka, DevSecOps Automation, Data, Cloud Migration, Internet of Things, Serverless and APIs, Containers and Orchestration, Cloud Security, Operational Excellence, Mobile Development, Hybrid Cloud Strategy, AI & Machine Learning, Fullstack Development, Application Modernization, and Managed Services.

READY TO GET STARTED?
Get in touch with us! We're thrilled to answer your questions and help you define a vision. Contact Us
- CLOUD NATIVE | Cedrus Digital
CLOUD NATIVE

Moving to the Cloud is the cornerstone of Digital Transformation. Building Cloud Native products gives you more flexibility, lowers costs, and is critical for organizations that need to move at a competitive speed as they grow and scale. We use our knowledge of building Cloud Native products, architectural best practices, and our assets to assist you in your journey to the Cloud.

OUR CLOUD PARTNERS

OUR PROCESS

Our Cloud Native experts have extensive experience with the ongoing, iterative journey to the Cloud. Our process will guide you to Cloud maturity and agile implementation. See Our Offerings

OUR WORK

With 20+ years of successfully delivering enterprise-grade solutions to some of the world's largest companies, Cedrus is the perfect partner to tackle the most complex enterprise challenges and guide you through your Cloud journey. Our extensive partnerships and expertise with leading Cloud providers allow us to deliver scalable solutions that span Public, Private, Hybrid, and Multi-Cloud. Your Cloud transition introduces a multitude of new opportunities to improve not only the technology you use, but the way your business operates and solves problems.

American Commercial Barge Line (ACBL) needed to upgrade from an in-house solution that no longer met their growing logistics and technology needs. Cedrus advised in business and technical capacities to diagnose their most pressing issues, and used our years of experience in similar fields to propose a custom solution that would help them manage logistics through IoT and expand and enhance their existing software on future-safe AWS technologies. Now ACBL has a strong, modern infrastructure with the benefits of improved accuracy, efficiency, and user experience. Take a look at their incredible story below!

USE CASES

See the incredible benefits of moving to the cloud that we've provided to clients in Healthcare, Shipping, Insurance, Banking, and other industries.
Explore More Here

CLOUD OFFERINGS

Assess
Cloud Readiness Assessment
CI/CD Readiness Assessment
Cloud & Architecture Discovery Workshops
Organization Security Assessment

Measure and Optimize
Infrastructure as Code Quickstart and Implementation
Workload and Users Onboarding Automation Implementation
Policy as Code Quickstart and Implementation
AIOps Assessment and Implementation
Cost Optimization Assessment and Implementation
Cloud Security Assessment and Remediation
Application Security Assessment
Performance Assessment
Cloud Security Monitoring Assessment and Implementation
Multi-Cloud Monitoring Assessment and Implementation

Proof and Foundation
Design Thinking Workshop
Project One MVP Implementation
Well-Architected Framework Assessment
Landing Zone Implementation
CI/CD Pipeline Implementation
Backup Migration
Data Migration

Managed Services
Application Maintenance and Management
Platform Maintenance and Management
Cloud Managed Services
Workload and User Onboarding Assessment and Automation

Implement
Cloud Center of Excellence
Platform Automation (IaC)
DevSecOps Strategy, Framework, and Toolings
Data Migration, Data Lake, and Event-Driven Implementation
API Strategy & Lifecycle Implementation
Application Migration and Modernization
IoT Quickstart and Full Implementation
Blockchain Quickstart and Implementation
Container Strategy and Implementation
Serverless Quickstart and Full Implementation
PCF Migration

More Resources

WANT TO LEARN MORE?
Get in touch with us! We're thrilled to answer your questions and help you define a vision. Contact Us