- Cleaner Microservice Orchestration With Zeebe+Lambda
Brian McCann, Software Engineer

In this post I'll talk about Zeebe, a source-available workflow engine that helps you define, orchestrate, and monitor business processes across microservices. I'll show you how to quickly get a full application that leverages Zeebe and Lambda running on AWS. I'm piggybacking off a post by the company's co-founder in which he showed how you can use Zeebe with Lambda. You may want to start with that one first for a general overview of Zeebe and how it fits in with serverless; then, if you're interested in quickly getting all the components running on AWS, this post will show you how.

If you embrace AWS Lambda, or are considering adopting it, you may see Lambda as an alternative to Kubernetes and prefer the simplicity, reliability, and out-of-the-box scaling that the serverless way offers. I was motivated to write this for that type of developer. For Zeebe to work we need something running all the time, so we can't be fully "serverless". My concession is to use ECS to bridge the gap. ECS is a simpler and cheaper alternative to Kubernetes when you're getting started with containers. If you are a serverless-first developer, you or your organization might not want to take on the overhead of running and learning k8s if almost all your workloads run fine on Lambda anyway. Zeebe is a super powerful tool that allows you to compose your Lambda functions into arbitrarily complex stateful applications. Today I'll show how to get up and running quickly with Zeebe and Lambda using ECS. This is what we'll make.

Use Case

In case you don't know about Zeebe, I'll try to describe what it does through a use case. A little while ago I was working on a backend that would allow customers to put a down payment on a vehicle and then go pick it up at a dealership. The whole process wasn't too complicated, but it did involve 5–6 discrete steps involving external API calls.
Any of those steps had a number of potential failure paths, and based on the outcome of each step you would want to take a different action. On top of that, there are different types of errors: errors due to conditions in the physical world, like a customer putting their address in wrong, and technical errors like network outages or a bug in the code. You can imagine how the number of scenarios grows exponentially each time you add a step. General-purpose programming languages on their own are great at a lot of things; orchestrating a complicated sequence of steps is not one of them.

Using event-driven architecture can help us manage emerging complexity by separating business logic into individual microservices whose isolated behavior is simpler to reason about. What we lose is end-to-end visibility of the business process, so debugging is a challenge and refactors are scary. This is where a tool like Zeebe, or its predecessor Camunda, can help. These tools let us represent our processes as diagrams that are executable. Here is an example:

One of the main advantages is that our business logic is now organized into manageable and isolated components. In addition, and in my opinion most importantly, this strategy separates the concern of orchestration from everything else. This allows us to write functional-style microservice components that are easy to read and refactor. All of our orchestration is managed by a dependable and tested framework, and the orchestration logic is cleanly separated rather than peppered throughout our microservices (as tends to happen).

The boxes with gears are "service tasks", which means they represent pieces of code external to Zeebe. The process engine (Zeebe) is responsible for executing the instructions and will invoke the code we've written where specified. In today's example the service tasks (the little boxes with gears) will be implemented as Lambda functions.
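To make that separation concrete, here is a deliberately minimal sketch of the service-task idea. This is not Zeebe's API (in a real deployment the process definition is BPMN and the engine is remote); the task names and handlers are invented purely to illustrate how the ordering of steps lives apart from the handlers' business logic.

```python
# A toy "engine", not Zeebe: the engine owns the ordering of service
# tasks, while each handler contains only its own business logic.
# Task names and handlers are invented for illustration.

def reserve_vehicle(ctx):
    ctx["reservation_id"] = "res-123"   # stand-in for an external API call
    return ctx

def take_deposit(ctx):
    ctx["deposit_taken"] = True         # stand-in for a payment call
    return ctx

HANDLERS = {
    "reserve-vehicle": reserve_vehicle,
    "take-deposit": take_deposit,
}

def run_process(task_sequence, ctx):
    """Walk the process definition, invoking each service task's handler."""
    for task_type in task_sequence:
        ctx = HANDLERS[task_type](ctx)
    return ctx

result = run_process(["reserve-vehicle", "take-deposit"], {})
```

Notice that adding a step or reordering the flow changes only the process definition, not the handlers; that is the property the diagram-driven approach gives us.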
There are a lot of other powerful features exposed by BPMN (Business Process Model and Notation); the ones pictured above are just a few.

We need a way for Zeebe to trigger the Lambda functions, because Lambda functions by design are dormant until you trigger them. We will use the code provided by the zeebe-lambda-worker to create a link between the orchestrator (Zeebe) and the business logic (Lambda). The worker will listen for events on the Zeebe broker, pick up work tasks as they come in, and forward the payload to Lambda. The original post covers how all this works. Here is what I'm adding:

How to spin up an ECS cluster and run the zeebe-lambda-worker on it
How to use an IAM role for authorization instead of AWS credentials

Here are the steps involved in this walkthrough:
1. Get set up with Camunda Cloud: sign up for a free Camunda Cloud trial account and launch a Zeebe instance
2. Use the Serverless Framework to deploy several Lambda functions and an IAM role
3. Get set up on ECS: create an ECS cluster using a setup shell script, then deploy the zeebe-lambda-worker to it
4. Test that everything works end to end

Sign up for Camunda Cloud

Camunda and Zeebe are the same company. Zeebe is their newer product, and Camunda Cloud is their managed cloud offering, which happens to run on the newer Zeebe engine. It's confusing, I know; it probably has something to do with marketing, as Camunda is an established and fairly well-known brand. Getting set up with a development Zeebe instance is fairly straightforward. This post from the Zeebe blog walks you through the steps. The whole thing is a good read, but you can skip everything after the "create a client" part if you just want to get up and running. We'll need the broker address information and the client information later. Once we're set up with a free Zeebe instance we can deploy the Serverless Framework project.

Deploy the serverless project

Prerequisite: you need an AWS account and an IAM user set up with admin privileges.
This is a great guide if you need help. Clone the repo https://github.com/bmccann36/trip-booking-saga-serverless and cd into the directory trip-booking-saga-serverless/functions/aws. In the serverless.yml file, update the region property if you'd like to deploy to a region other than us-east-1. I have made barely any changes to this from the original forked repository. I removed the HTTP event triggers from the functions since we won't need them for this; this makes the project deploy and tear down faster since there are no API Gateway resources. I also defined an IAM role that will give our zeebe-lambda-worker (which we have yet to deploy) permission to invoke the Lambda functions. The policy portion of the role is what we'll give to the ECS lambda-worker. Run the command sls deploy -v to deploy the Lambda functions (the -v gives you verbose output).

Connecting Lambda + Zeebe with ECS

The last step to get our demo application working end to end is to deploy the zeebe-lambda-worker. The source code can be found here. I also forked this from Zeebe and made a small change: adding support for IAM role authorization. Since our ECS task will assume a role that gives it permission to invoke the Lambda functions it needs, we do not need to supply AWS credentials to the worker code. I added an if/else statement so that a role is used if no AWS credentials are supplied. This works because the AWS SDK sources credentials hierarchically. The role referred to in this snippet is the one we created earlier when we deployed the Serverless Framework project. If you prefer to use an accessKey/secretKey pair, make sure those keys are not the same keys you use as an admin user to work with AWS. You should create a role or user with credentials that allow the least permission possible; in this case we only need permission to invoke a few specific Lambda functions. I have already packaged the zeebe-worker as a Docker image and pushed it to a public repo.
The ECS setup, which we will get to in a moment, will pull this image. If you want, you can modify the worker source code and/or build the image yourself and publish it to your own repository. You'll need to package it as a .jar first, as it is Java code.

Prerequisite: ecs-cli installed on your machine. You can get it with the aws binary or Homebrew. If you run into issues at any point, look at the AWS walkthrough for reference. I mostly followed the steps outlined there, with a few minor modifications. I have chosen to use a launch type of EC2.

Fill in some configuration

To make it easy to configure your ECS cluster, I've included some configuration template files, all of which have the word "SAMPLE" in the file name. To use them, cd into zeebe-event-adapter/ in the trip-booking-saga repo. Then copy all the files with the word SAMPLE in front and save them with "SAMPLE" removed from the file name. This will cause them to be ignored by git; I've configured the .gitignore to ignore these files so that you or I don't accidentally commit sensitive credentials to source control. Next, populate the values in each file as outlined below.

(file) aws.env
AWS_REGION=

(file) Camunda.env
ZEEBE_CLIENT_CLOUD_CLUSTERID=
ZEEBE_CLIENT_CLOUD_CLIENTID=
ZEEBE_CLIENT_CLOUD_CLIENTSECRET=

(file) ecs-params.yml
...
# modify this line
task_role_arn: arn:aws:iam:::role/zeebe-lambda-worker-role
...

The values we supply in Camunda.env will be used by the zeebe-lambda-worker to connect to the Camunda Cloud Zeebe instance we created in the first step. Once we've filled in the values in each of these files we're ready to deploy. I've combined all the steps into one bash script so everything can be deployed with one command. Just run the setupEcs.sh script and supply your region as the only argument, i.e. bash setupEcs.sh us-east-1. If you run into problems at any step, you may want to try running the commands in the script one by one manually, or refer back to the AWS tutorial.
Note: you may get this warning: INFO (service aws) was unable to place a task because no container instance met all of its requirements. Don't get impatient and kill the process; this just means your infra isn't quite ready yet, and it should resolve on its own.

If you are used to working with Kubernetes, the ECS terms will be confusing. An ECS service is not the same as a K8s service at all and has nothing to do with networking. In ECS, a service just declares that you want to keep a "task" (basically a container with some config attached) running. It's the same idea as a service you'd set up on a Linux server.

Test that it works

Disclaimer: the main purpose of this post is to show how to easily deploy the zeebe-lambda-worker to work with Lambda functions, so I won't cover Zeebe usage much. If you want to learn more about Zeebe, check out Bernd Rücker's posts or try the Zeebe quickstart. Probably the simplest way to deploy and start the workflow is with the Zeebe Modeler tool, which you can download here. You can also use the command line tool zbctl. Once you have the Modeler downloaded, just open the trip-booking.bpmn file (in the zeebe/aws directory) and use the upload and play buttons to upload the workflow to the cluster and start an instance of the process. If the process instance works successfully end to end, you will see the completed instance in the Operate console. If we navigate to ECS, select the service we've deployed, and click on logs, we can see the successful results from the Lambda functions being passed back and forth through the zeebe-lambda-worker. And if we navigate to the Lambda console, select one of the Lambdas involved in our workflow, then select monitoring → view logs in CloudWatch, we can see the logs of the Lambda function itself.
Cleaning up

When you don't want the ECS resources anymore:

Take down the service: ecs-cli compose service rm --cluster-config zb-cfg
Remove the whole cluster: ecs-cli down --force --cluster-config zb-cfg

Closing thoughts

I am curious to hear what others think about pairing Zeebe with Lambda. Does the combination make sense? What would you do differently? Also, for anyone who uses or has used Step Functions, I am curious to hear how this solution compares.

I had not used ECS much before this, and I was pleasantly surprised by how easy it was to provision and use. It seems like they've added a lot of features since the last time I looked at it, to try to keep pace with Kubernetes. I also like how they use a docker-compose file and docker-like commands, so you can work with your containers in the cloud more or less the same way you would locally. My one big concern and hesitation about getting good at ECS is that it's clearly not the industry standard. Practically, as engineers, it just makes way more sense to use what everyone else is using, because that's what everyone already knows.

I think Zeebe is a very promising new product. I also think they have their work cut out for them, because there are a lot of competitors in this space. Some popular tools that fill the same niche are Netflix's Conductor and AWS's own Step Functions. They face competition not just from competitor frameworks but also from homespun orchestration solutions using event buses and other lower-level orchestration/choreography tools. I think a lot of orgs and developers don't even realize that there is a tool that can help them glue together or orchestrate microservices; in a lot of cases people tend to write this code themselves. In many cases it is a conscious decision, because they feel that adding in another library just creates additional complexity, makes their app harder to understand for new developers, or perhaps even limits what they can do.
I totally get that concern, as I've had these problems myself. For my personal development and collaboration style, I think Zeebe makes a lot of sense for a broad range of tech problems. I hear about a lot of people reaching for Kafka to achieve event-driven architecture. I have nothing against Kafka, and I think it is a technically impressive piece of engineering. An experienced dev can probably get up and running with it in a day, but they definitely won't be using it optimally, or even getting much benefit from it, for a long time, because it is complicated and requires a lot of use-case-specific configuration. Kafka is more low-level than Zeebe and therefore applies to a broader range of use cases. However, I think that if you are just using Kafka as an event bus to drive a collection of choreographed microservices, Zeebe may offer a simpler and actually more powerful solution. With Kafka, things like event replay, retry strategies, and monitoring at a business-process level are capabilities we need to build ourselves. Zeebe is appealing to me because they've found a way to generalize these needs. This allows us as developers to simply use their API to endow our apps with these advanced features and ultimately solve a business problem.
- Using Active Directory with Netskope (Part 2)
In Part 1, we discussed why you would want to integrate AD with Netskope, the AD integration tools Netskope offers, and briefly touched on Netskope's REST API. In Part 2 we are going to dive deeper into using the REST API to add additional attributes, and provide a sample PowerShell script that adds automation capabilities. Keep in mind that this document is not intended to replace the existing Netskope documentation, and will not cover the implementation details for the Netskope tools, as those are already well defined.

Netskope REST API and Directory Importer

As mentioned in Part 1, the Netskope REST API not only offers a variety of options to query data from Netskope, it also provides the ability to upload custom user attributes from a CSV or TXT file, as long as the file is in a specific format (Netskope documentation on the file format and limitations can be found at Administrators > Administration > REST API > Add Custom User Attributes). While this sounds eerily similar to what the Directory Importer is capable of (with a little additional configuration), using the REST API provides a few distinct advantages.

Let's take a real-world example. Your web proxy uses an employee's email address as a user key in order to correlate traffic 'events' to a user, and you have configured the On-Premises Log Parser (OPLP) to upload those events to Netskope. To add some additional user information to Netskope, you configure the Directory Importer to use the mail (email) field to correlate when uploading all of your additional user attributes. But what happens when you have data from a different source, such as Forward or Reverse Proxy data, and the application is not using the email field as the user identity? (Perhaps it uses the employee ID from your HR system.)
In this scenario, the events generated from that source are not going to contain any of the additional user information that was pulled from AD via the Directory Importer, because the user keys do not correlate with each other. That's where the REST API comes in. As we mentioned in Part 1, you can have duplicate rows in the CSV file, one for each possible key that can be matched against, which allows the uploaded attribute data to correlate with all events, even when they use different keys. Furthermore, with the REST API based solution, you can pull user information from sources besides AD (e.g. an HR system) and include it in your file to be uploaded.

Additional User Attribute Script

Netskope provides a bash script that can be used to upload a CSV or TXT file to your tenant. The script works as designed, but a few issues need to be considered when implementing it as part of a solution. It is likely that you will have already provisioned a Windows instance for the Directory Importer, but since the Netskope-provided script is a bash shell script, you would need to provision a Linux instance in order to run it. This is extra effort and extra cost. You would probably also need to work with your Active Directory team to obtain an export from AD, format it, FTP it over to the Linux instance, and then run the shell script to upload it to Netskope. Parts of this can be automated, but as a solution it is cumbersome and potentially fragile.

In an effort to simplify the solution, we have created a script (attached below) that performs the same functionality as Netskope's bash script, but in Windows PowerShell. This means you can run it on the same Windows instance as the Directory Importer, where it will extract the data from AD, build a correctly formatted CSV file, and upload it to your Netskope tenant. Let's go through the various parts.
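The duplicate-row technique can be sketched briefly. This Python fragment is only an illustration of the idea (the actual script is PowerShell, and the header and attribute names here are invented, not Netskope's required schema): each user is written once per identity key, so events keyed by email and events keyed by employee ID both pick up the same attributes.

```python
import csv
import io

# Sketch of the duplicate-row technique. Column names are illustrative;
# Netskope's required file format is defined in its own documentation.

def build_attribute_rows(users):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["user_key", "department", "manager"])
    for user in users:
        # One row per identity key, so events keyed by email AND events
        # keyed by employee ID both correlate to the same attributes.
        for key in (user["mail"], user["employee_id"]):
            writer.writerow([key, user["department"], user["manager"]])
    return buf.getvalue()

csv_text = build_attribute_rows([
    {"mail": "jdoe@example.com", "employee_id": "E1234",
     "department": "Finance", "manager": "Ann Smith"},
])
```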
exportADAttributesToCSV

This function is responsible for querying AD and creating the corresponding CSV file in the appropriate format, in preparation for its upload to the Netskope tenant. The lines you must configure are as follows:

Properties to pull from AD: change the values in the parentheses to your choosing. You should limit this to 15 attributes, as that is Netskope's current limit for custom attributes.

Names of the custom attributes to be uploaded: the first $sw.WriteLine seen below is where the header row is written into the CSV file. These will be the names displayed within SkopeIT. Also keep in mind that the first column will be used as the key to correlate on (i.e. mail).

ManagerDisplayName logic: this block populates the manager display name with an empty string if it is empty or null; otherwise it will either query AD for each user to get their displayName (heavy performance impact) or attempt to parse the manager's name out of the canonical name. This can be removed if managerName is not one of the attributes pulled.

Adding records to the CSV: this section is where the actual records are added to the CSV file under the header row. It will need to be altered to match the header attributes in order and count if you change them. If you have applications that use different keys, this is the place to add additional $sw.WriteLine statements to add more than one record per user.

jsonValue

This function is responsible for parsing the responses from Netskope, trimming additional whitespace, and returning the value at a given spot in the JSON body.

getToken

getToken is responsible for using the NSUI and NSRestToken variables and calling the Netskope tenant to get a token for subsequent REST requests. This token will be used when uploading the custom attributes to the tenant.
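A rough Python equivalent of the jsonValue idea looks like this. It is only a sketch (the real helper is PowerShell), and the response shape shown is an assumption for illustration, not Netskope's documented schema.

```python
import json

# Sketch of a jsonValue-style helper: parse a response body, walk to a
# field, and trim surrounding whitespace. The {"data": {"token": ...}}
# shape is an assumed example, not Netskope's documented response.

def json_value(body: str, *path):
    value = json.loads(body)
    for key in path:
        value = value[key]
    return str(value).strip()

token = json_value('{"status": "success", "data": {"token": " abc123 "}}',
                   "data", "token")
```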
uploadFile

This appropriately named function is responsible for chunking (breaking into smaller pieces) the CSV file and uploading it to the Netskope tenant. It is primarily a port of Netskope's script to PowerShell.

getManagerName

getManagerName is responsible for attempting to parse the manager's full name out of the manager's canonical name, and will require some alteration depending on the way the canonical name is formatted within AD. Currently it is set for 'CN=LastName\, FirstName MiddleName,OU=global,DC=mydomain,DC=com'. This function is an effort to reduce the number of requests made to AD and improve the script's performance, as it will not have to wait for multiple responses from AD. If managerName is not an attribute you intend to upload, or you elect to go the route of querying AD for the manager's displayName, you can do that within the exportADAttributesToCSV function.

Main Block

The main block is responsible for allowing insecure HTTPS connections (the Netskope script also does this, but PowerShell has an issue with it), some timing logic for the logs, and calling the various functions appropriately.

Configuring Windows for the Additional Attribute PS Script

The script is a modified version of Netskope's additional attribute shell script, designed to run in Microsoft PowerShell. It requires PowerShell v4.0 or higher, enabling the AD module for PowerShell in Roles and Features, and installing curl, as PowerShell does not have native functionality to make multipart/form-data REST requests.
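The two ideas above can be rendered roughly in Python for clarity. These are sketches, not the script's code (the actual script is PowerShell), but the logic matches what is described: fixed-size chunks plus one final smaller-or-equal chunk, and CN parsing that honors the escaped comma in the canonical name format given above.

```python
# Rough Python renderings of uploadFile's chunking and getManagerName's
# parsing; the actual implementations are PowerShell.

def split_into_chunks(data: bytes, max_chunk_size: int):
    # Fixed-size pieces, plus one final piece smaller than or equal to
    # max_chunk_size.
    return [data[i:i + max_chunk_size]
            for i in range(0, len(data), max_chunk_size)]

def parse_manager_name(dn: str) -> str:
    # Extract the CN value from a name like
    # 'CN=LastName\, FirstName MiddleName,OU=global,DC=mydomain,DC=com',
    # honoring the escaped comma, then flip 'Last, First Middle' around.
    start = dn.index("CN=") + 3
    cn, i = "", start
    while i < len(dn):
        if dn[i] == "\\" and i + 1 < len(dn):   # escaped character
            cn += dn[i + 1]
            i += 2
        elif dn[i] == ",":                      # unescaped comma ends the CN
            break
        else:
            cn += dn[i]
            i += 1
    last, _, rest = cn.partition(", ")
    return f"{rest} {last}".strip() if rest else cn
```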
Installing the AD Module for PowerShell

To install the AD module for PowerShell, log on as an admin and follow these steps:

Open Server Manager, select 'Manage' in the upper right corner, and then 'Add Roles and Features' from the drop-down
Select 'Role-based or feature-based installation' and hit next
Select the local server and hit next
Jump to 'Features' in the left pane, scroll down to 'Remote Server Administration Tools', and expand it
Expand 'Role Administration Tools' and 'AD DS and AD LDS Tools'
Select 'Active Directory module for Windows PowerShell' and hit next
Confirm the installation and complete

Installing curl

To install curl, log on as an admin and follow these steps:

Download the latest curl installer for your Windows environment from https://curl.haxx.se/download.html
Install curl and accept all defaults
Locate where the curl executable is installed (likely C:\Program Files\cURL\bin if all defaults were selected) and save this for the script variable configuration

Creating the Encrypted Password File

To keep the AD user password secure, the following process creates a secure string and saves it in an encrypted file. Since this process uses the Windows Data Protection API (DPAPI), you must do the following as the user that will be used to run the script. Failure to do so will result in the script being unable to decrypt the password.

Log into the server as the user that will be running the script
Open a PowerShell window and type (Get-Credential).Password | ConvertFrom-SecureString | Out-File <path>, where <path> is where you would like the encrypted password file to be stored, e.g. (Get-Credential).Password | ConvertFrom-SecureString | Out-File "C:\PSScripts\ExportADUsers\SecurePW.txt"
Save the path for script configuration

Configuring Script Variables

The following variables should be configured in the script to match your environment:

DEBUG – toggle to increase logging when script issues are encountered
NSUI – domain name of the Netskope tenant where attributes are to be uploaded
NSRestToken – token used to make REST calls to the Netskope tenant. If this value is changed on the tenant, it must be updated in the script
maxChunkSize – if the AD export file is larger than this value, the file will be divided into chunks of this size, plus one chunk smaller or equally sized. The chunks are then uploaded to the Netskope tenant in multiple parts. Recommended size is 5 MB
path – path where scripts and files exist. Must end in /*.*
csvFile – path and name of the CSV file (e.g. "$path\ALLADUsers_$logDate.csv")
logFile – path and name of the log file (e.g. "$path\UploadCSVtoNS.log")
ADServer – AD server name (e.g. "mydomain.com")
searchBase – lowest level in AD where queries are performed (e.g. "OU=Global,DC=mydomain,DC=com")
user – user used to query AD. Must have read access to AD and be the same user that created the encrypted password file
pwFile – location of the AD user encrypted password file from the previous section

Scheduling the Script via Task Scheduler

To keep up with an ever-changing global directory, this script should be automated so the user data on the Netskope tenant stays up to date. It is recommended to run it daily using Windows Task Scheduler. If this needs to be changed:

Log into the server as an admin
Open Task Scheduler under Administrative Tools
Browse under Task Scheduler Library – Netskope to find 'Run AD Upload Script'
Right-click and select Properties
Under the tabs, these options should be set:

General
When running this task, use the account that has access to AD and that you used to create the password file
Run whether user is logged on or not – enabled
Run with highest privileges – enabled

Triggers
One trigger should exist to start the script at a given time, and it should be scheduled to terminate if it goes over an allotted amount of time
Begin the task – On a schedule
Set timing to the preferred period (e.g. Daily @ 0800 GMT)
Stop task if it runs longer than – should be set to less than the frequency your script is set to run
Enabled – enabled

Actions
Should be set to start a program
Program/script – path to the PowerShell executable (e.g. C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe)
Add arguments – the script to execute once PowerShell is opened, using the -File argument and the absolute path to the script (e.g. -File "C:\PSScripts\ExportADUsers\NS_UserAttr_Upload_v1_0.ps1")

Conditions – none

Settings
Allow task to be run on demand – enabled
Stop the task if it runs longer than – enabled
If the running task does not end when requested, force it to stop – enabled
If the task is already running, then the following rule applies – Do not start a new instance

Maintenance

Manual periodic cleanup of log files and CSV files will need to be performed in the script directory so the server does not run out of storage space. Our recommendation is to clean up files at least once a month. Alternatively, you could implement those capabilities in your version of the script, or have a separately scheduled PowerShell script perform the cleanup.

To access the script, please click here.

Paul Ilechko | Senior Security Architect
Andrew Hejnas | Cloud Security Specialist & Solutions Architect
- API Czar, CodeSoju at the official Angular and Node Meetups in New York
A month ago, I had the pleasure of presenting API Czar & CodeSoju at the official AngularJS and Node.js Meetups, which were founded in New York City in 2010 and 2012 respectively. API Czar is a rapid API development tool that enables teams to generate and deploy best-practice-based APIs, while CodeSoju is an open source initiative that provides a set of standards, best practices, and tools to help developers during all phases of the development lifecycle. In these two presentations, I talk about how API Czar & CodeSoju work, as well as the importance of the growing community of developers that support them. Both tools sparked great discussion in the two communities, mainly because they solve everyday challenges those communities face. API Czar & CodeSoju derived from a key question we ask ourselves after completing any project: how can we maximize our team's efficiency on large-scale projects? For more information, please visit apiczar.io and codesoju.io.
- Cedrus Digital | Customized Digital Transformation
DIGITAL TRANSFORMATION SOLUTIONS DONE RIGHT Your company is unique; your technology solutions should be too. Our expert consultants advise and aim to tailor our solutions to your specific business needs with unprecedented speed and agility. Why trust your innovation to anyone else? WHAT WE DO FEATURED PARTNERSHIPS OUR VISION To deliver our customers innovative solutions with unparalleled levels of functionality and sophistication, bridging the present into the future, while bringing unprecedented operational efficiencies into their business. WHO WE ARE 82 % Repeat Customers 164 % Customer base growth over the last 4 years 44 % CAGR for the past 3 years About Cedrus We are a young, imaginative, diverse company, deeply rooted in successfully solving large business problems for the past 25 years. Our employees, partners, and customers share our zeal for excellence. Learn more about what makes us special. MEET CEDRUS Our Expertise We tame cutting-edge technologies to fit your business. Our extensive certifications, internal investments in assets and IP, and our experience from hundreds of transformational projects inform our wisdom as we take on new challenges. OUR EXPERTISE HOW WE'RE DIFFERENT We expertly guide you through discovery and design, architecture and development, and production and support, with an approach perfectly tailored to your unique business needs. Realizing your business' potential has never been easier! Learn about our offerings at no commitment, risk, or cost. WHAT WE DO See us in action! Get to know our work through some real-life examples. Take a look at some use cases, demos, and solutions that others are already implementing. USE CASES Grow with us! WE ARE HIRING! Want to be part of a dynamic, creative team? Want to join a company that will invest in you? Learn more about joining our team. 
JOIN US OUR PARTNERSHIPS Our close partnerships with some of the industry's top performers will give you the best experience in using their groundbreaking, industry-defining tools and platforms. By keeping an open mind, a scientific curiosity, and an objective outlook, we accommodate and adapt our recommendations to the context of your business. LEARN MORE WHAT WE'RE UP TO NOW Cleaner Microservice Orchestration With Zeebe+Lambda Automatic Text Summarization: Demystified AWS Blog: FADEL Reduces API Generation from Days to 30 Minutes on AWS Read about our success with Fadel on the AWS blog here: https://aws.amazon.com/partners/success/fadel-cedrus/ WHAT PEOPLE SAY Media and Entertainment Software Provider We were impressed with Cedrus' ability and assets to support us during our Cloud migration. Cedrus is not a typical systems integrator- they understand the underlying business challenge and help resolve it through cutting edge technology practices. Their unparalleled expertise in implementing and deploying APIs/microservices to the Cloud enabled us to quickly and smoothly transition our software offerings to AWS. READY TO GET STARTED ? Get in touch with us! We're thrilled to answer your questions and help you define a vision. Contact Us
- JOBS | Cedrus Digital
JOIN THE TEAM Want to join our stellar team of hard-working, talented, creative people? Drop us a line if you think you'd be a good fit for any of our job openings. COME WORK WITH US ! We are always looking for bright, driven individuals to join our slightly unconventional team. Are you looking to grow and learn in your career? Not afraid to speak your mind when you create? Want to be part of a team of unlike-minded peers? You may just be a great fit! You'll get amazing experience We have numerous clients in diverse lines of business, with expansive projects that will let you flex and expand your expertise. We will foster your growth Be part of an awesome team Our team has years of mentorship experience. We keep our team members up-to-date with the latest technology, providing resources for learning, a team learning culture, and supporting the acquisition of certifications. We're an unusual mix of people with diverse experience and a can-do attitude. Our team is motivated, hungry to learn, and eager to contribute and take ownership of their work. Our office space rocks Cedrus is located in the heart of Manhattan with easy bike access, endless coffee, and community collaboration and gatherings. You will have remote options Team members have the autonomy to work remotely as needed once they are up to speed with their team's needs and rhythms.
- Who We Are
OUR TEAM
In the Era of Digital Transformation, where a lot of so-called strategic players talk the talk, we walk the walk. We think strategically but execute tactically. One step at a time, we will guide you through your Digital Transformation journey while helping you build skills internally. Our assets and IP, strategic partnerships, and long experience servicing some of the world's largest firms ensure your success.

OUR MISSION
Assist our customers on their path to digital transformation. Bring them advice, expertise, and experience to allow them to securely expand their business and streamline their processes.

OUR CULTURE
Despite Cedrus' relatively young age, our executive and senior team have been working together, solving large-scale problems at some of the world's largest companies, for more than 25 years. Our DNA and culture are deeply rooted in an uncompromising value system: fairness and ethics, inside and out. While being recognized for their industry and technology vision and leadership, our senior executives and senior managers remain rooted in the realities of projects and customers' needs. This "shot of reality" is key to evolving our offerings, practices, investments, and partnerships.

OUR MORAL COMPASS
With our customers: Be on their side. Maximize the value-to-cost ratio. Deliver to exceed expectations. It's no surprise that more than 2/3 of our business is with repeat customers.
With our people: Fair chance for all. A merit-based place of work, with accountability and equal opportunity for growth. Our people are held to the highest ethics and standards, and we only hire people who respect and fight for such values. Cedrus is an active Equal Opportunity Employer, with the mission to prove that business and social responsibility can co-exist and lead to great success. We reject and fight against any form of discrimination and hatred, and we lead every day by example to eliminate these forms of inequality in society.
MEET THE TEAM

Nicolas Jabbour, CEO
Entrepreneur and technology veteran. Leads the vision at Cedrus. At the service of our customers and our teams.

Mike Chadwick, SVP of Business Development and Sales
Responsible for sales and partner strategy. Believes in the union of technical skills and business value through a design-based approach. Passionate about delivering the right solution and implementing it the right way.

Ashraf Souleiman, Cloud Native Lead
Cloud Practice Lead. Helps organizations implement their digital transformation vision and optimize their end-to-end digital experiences.

Hanna Aljaliss, Cognitive Automation Lead
Focuses on developing business accelerators and innovations using process automation and artificial intelligence to drive business process improvement and operational efficiency.

Kyle Watson, Security Lead
Guides and advances the cloud security consulting team. Ensures quality in delivery and customer success. Sets investment priorities in security tech.

Charles Allen, Human Resources
Believes fostering positive values and culture is key. A focused HR leader who helps keep the team at Cedrus moving forward.

Matt Hejnas, Principal Architect
Cloud Integration Competency Lead with deep experience architecting, designing, and delivering integration solutions on and off prem, with APIs and streams, for our customers.

Chris Brause, UX Competency Lead
Leads a team of innately curious designers who define the what, why, and how of usable, beautiful products.

Paul Ilechko, Netskope Consulting Services Lead
Provides technical leadership and guidance to ensure successful project outcomes for our Netskope customers, working closely with our technology partners.

Chris Dougherty, Practice Solution Architect/SE
Guides customers on selecting and implementing appropriate cloud security solutions to meet their business and regulatory needs.
Mohamed Maher, Architecture & Middleware Solutions Lead
Solutions architect with over 20 years of hands-on experience across a wide range of technologies. Expertise in designing and delivering solutions around business process automation and process improvement.

Ez Nadar, AI Solutions Lead
Head of AI Solutions. Focuses on transforming businesses into AI enterprises through the development and integration of conversational AI, NLP, and predictive analytics into their business workflows.