
  • Our secret to building winning teams

    What's our secret to winning IBM's Build-A-Bot Challenge? It's the same as our secret to providing the solutions and high level of leadership that clients have come to expect from such a small (but growing) firm: a real, tangible, company-empowered growth mindset. While many companies like to list employee growth and development as a bullet point in a list of values, Cedrus delivers on the promise to a depth that I haven't seen from anyone else. I'm relatively new to Cedrus, but I can already see the results of this mindset manifesting in all aspects of the company and in how we stand out to our customers. I was told in my first interview that Cedrus cares less about which specific tools and technologies I know how to work with, and more about my capability to learn new things and contribute as part of a team: new techniques can easily be learned, but one's ability to grow and be a value-adding collaborator is much harder to cultivate. Then, they back this mindset up with encouragement to go learn whatever we feel is necessary to succeed and grow. Essentially: take whatever time you need for learning (just don't bill the client for that time), and feel free to expense any learning-related costs (but run significant costs past management first). And this goes into effect on day 1: I expensed some courses within my first week. Meanwhile, other companies impose limits, like requiring someone to be employed for a year before any training expenses are approved, or annual spending caps that wouldn't cover a conference or a certification. Further, this mindset extends far beyond structured or conventional training: Want to read some research papers? Please do! Want to form a team to enter a hackathon? How can we help?
The company realizes that self-directed, independent learning is crucial not only to growing someone's skills, but also to keeping them engaged in their work, and it looks for ways to reward and empower people who go out of their way to better themselves, since that betters the company at the same time. So when a coworker suggested that we form a team to enter the Build-A-Bot Challenge (with only about 2 weeks remaining), we didn't have to stress out about keeping this side project secret; we knew that the company would support us in our efforts. Besides the intangible peace of mind, we were also able to do things like poll our department for feedback on our solution idea, call out work on the project as part of our workload in status update meetings, and even post our submission video on our company YouTube channel. That's all well and good, but how does a project like this help Cedrus' clients? This IBM hackathon facilitated a level of knowledge transfer, experience building, and design thinking that could not be replicated through other means, all in a short amount of time and at no additional cost. Together our team went through the entire development lifecycle, from architecting a solution, to planning sprints, developing a prototype, iterating on the user experience, and deploying. While many of my team members have been through their fair share of full lifecycles, each time through is a learning opportunity, and I was able to learn so much from each of their strengths this time through, leaving me much better equipped to serve our clients and deliver solutions. And this hackathon isn't a rare occurrence; Cedrus also invests in R&D to ensure our consultants are well versed, knowing when cutting-edge research is the right solution for our customers' needs. Cedrus delivers growth for our clients by growing people who like to grow.

  • Audio data with deep learning - the next frontier

    Introduction Artificial intelligence (AI) has become an inherent part of our lives, and machine learning is being applied to new problems every day. With the recent advancements in deep learning, most AI practitioners agree that AI's impact is expanding exponentially year after year. This, of course, has been with the help of big data and unstructured data. These two types of data are not synonymous. Big data can be structured or unstructured and has three characteristics: volume, velocity (the speed with which it arrives), and variety (different types, like pictures, videos, etc.). Unstructured data, on the other hand, refers to data that is not organized or repetitive; it is oftentimes large in volume and velocity and is thus often referred to as big data. Two well-known examples of unstructured data are images and text. Image data is used to solve complex computer vision problems such as facial recognition and autonomous driving. Text data, on the other hand, is used to solve Natural Language Processing (NLP) problems such as understanding spoken language or translating from one language to another (refer to our NLP blog here). Because of these applications, image and text data have received a lot of attention. Along with images and text, there's a third type of unstructured data: audio data. Audio data is less well known, and we'll be diving into it in this post. This type of data comes in the form of audio files (.wav, .mp3, etc.) or streaming audio. Most applications of audio data are in the music domain, in the form of music cataloging or lyric generation. The complexity of audio data has limited its mainstream applications. This has changed with the rapid development of deep learning. Audio data applications Audio data is used to build AI models that perform automatic speech recognition (ASR).
ASR solves problems such as understanding what is said to a voice assistant like Alexa or Siri, or converting speech to text for applications such as voice bots and automatic medical transcription. In addition to ASR, audio data is also used to solve problems such as speaker detection, speaker verification, and speaker diarization. Applications of speaker detection include Alexa’s ability to change responses based on who is speaking, or identifying speakers in live-streaming audio or video. An application of speaker verification is biometric security. Speaker diarization refers to separating audio to identify who is speaking “what” and “when”. A common application of speaker diarization is transcribing meeting recordings or phone conversations by speaker. As this technology matures, many more applications based on conversations between people become possible, e.g., automatic result generation for verbal school tests, or mental health diagnosis based on conversations. Features of audio data Unlike text and image data, audio data has hidden characteristics in its signal that tend to be more difficult to mine. Most audio data available today is digitized. The digitization process stores audio signals by sampling them. The sampling rate varies by the type of media. For example, CD-quality audio uses a sampling rate of 44,100 Hz. This means that the audio signal is sampled 44,100 times per second and stored in a digital format. Each sample value represents the intensity, or amplitude, of the sound signal. This sampled data can be processed further to extract features, depending on what kind of analysis is required. Spectral features, which are based on the frequency domain, are probably the most useful. Examples of such features and their applications are as follows (there are many more): 1. Mel Frequency Cepstral Coefficients (MFCC) – represent the envelope of the time power spectrum, which characterizes sounds made by the human vocal tract 2.
Zero crossing rate – used to detect percussive sounds and a good feature for classifying musical genres 3. Average energy – can represent formants, which uniquely identify a human voice 4. Spectral entropy – used to detect silence A speech model will extract the above features, depending on the application, and use them in a supervised or unsupervised machine learning model or in a deep learning model. Models for speaker detection, verification and speaker diarization Speaker detection and speaker verification are classification problems. For speaker detection, audio features must be extracted for each speaker. The audio feature data can then be fed to a neural network for training. Speaker diarization has historically been treated as an unsupervised clustering problem, but newer models are based on neural networks. Speech model performance The performance of speech models has yet to overcome the following challenges: (1) poor accuracy on recordings of people of the same gender or of people with different accents, (2) poor speech-to-text accuracy due to language complexities, and (3) inability to deal with background noise. The first challenge can be overcome with more training data. New methodologies that bring together acoustic data and text data are addressing the second challenge. Speech denoising (removal of background noise) is another area that requires a lot of noisy and clean speech samples. Overall, one can expect speech models to perform better with more varied data, as is the case for deep learning models in other areas as well. As building complex deep learning models becomes easier with various frameworks, the majority of the work is in understanding and preparing the data. Conclusion Audio data is coming into prime time along with its cousins, image and text data. The main driver has been deep learning. Applications such as voice assistants and voice bots have entered the mainstream due to this technology development.
With a broad spectrum of models in the areas of ASR, speaker detection, speaker verification, and speaker diarization, we can expect a larger array of conversation-based applications. Crossing this frontier will require integrating multiple types of data and preparing them well so that they can be ingested by advanced models to produce good predictions. About Cedrus: Cedrus Digital studies audio and conversation data and provides strategies for how information can be harvested from them. Cedrus Digital provides analytics and data science services to gain visibility into high-volume call center inquiries – creating opportunities for process efficiencies and high-value Conversational AI call center solutions and supplementation. Chitra Sharathchandra is a Data Scientist who enjoys working on implementable AI solutions related to multiple types of data.
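    As a minimal sketch of two of the spectral features described above, the snippet below computes the zero-crossing rate and spectral entropy of a synthetic tone using plain NumPy. A library such as librosa provides these features (plus MFCCs) out of the box; the signal here is synthetic, purely for illustration.

```python
import numpy as np

# One second of a pure 440 Hz tone at the CD-quality sampling rate.
sr = 44_100
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

# Zero-crossing rate: fraction of adjacent sample pairs that change sign.
# A 440 Hz tone crosses zero about 880 times per second.
zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))

# Spectral entropy: Shannon entropy of the normalized power spectrum.
# A pure tone concentrates its energy in one frequency bin, so its
# spectral entropy is near zero; silence or noise scores much higher.
power = np.abs(np.fft.rfft(signal)) ** 2
p = power / power.sum()
spectral_entropy = float(-np.sum(p * np.log2(p + 1e-12)))

print(f"zcr={zcr:.4f}, spectral_entropy={spectral_entropy:.4f}")
```

    A speech model would compute such features over short sliding windows rather than a whole recording, then feed the resulting feature matrix to a classifier.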

  • An Introduction to the World of Knowledge Graphs

    Introduction Data lakes, data warehouses, RDBMSs, NoSQL, SharePoint, and Excel: today’s enterprises have an overabundance of data stored across different organizations and technologies. However, the promise of big data’s ability to provide insights and revolutionize analytics has not fully materialized. In this post, we’ll take a look at why that is and how to achieve valuable insights into your data. Most companies today are data driven, and there has been an explosion in volume driven by accessible and affordable storage, along with the digitization of data and an explosion in IoT. Despite all this, many companies struggle to find a return on investment. What drives this unrealized potential is not the lack of data or the storage/access technology but rather the lack of knowledge. This is where Knowledge Graphs play a role in helping enterprises realize value in their data. Knowledge Graphs Empower AI Capabilities But what is a Knowledge Graph? A Knowledge Graph connects data across different sources (structured and unstructured) and provides a semantically enriched structure that enables discovery and insight and empowers AI capabilities. It can also be viewed as a network of objects with semantic and functional relationships between the connected objects/things. The more relationships are created, the more context the data objects/things have, which provides a bigger picture of the whole situation, helping users make informed decisions based on connections they may never have found otherwise. Although a knowledge graph relies on a graph database (the technology) to store and process data, it is the data, connectivity, and ontology that transform a graph database with properties into a knowledge graph. For example, an object node named PAM has little meaning to a computer or an algorithm (and to most individuals).
There is no context to associate PAM with an infection, or to know what relationships that infection may have with propagation mechanisms or preventive measures. A knowledge graph resolves this by labelling the PAM node as an infection; by associating the node with an infection ontology, an algorithm can start to understand the PAM entity in context with other node types (e.g., propagation mechanism, medication, preventive measures) that may also be in the knowledge graph. In summary, a knowledge graph understands real-world entities and their relationships to one another. The Key Benefits of Knowledge Graphs Combine Disparate Data Silos: Knowledge Graphs help to combine disparate silos of data, giving an overview of all the organization’s knowledge – not only within departments but also across departments and global organizations. Bring Together Structured and Unstructured Data: Knowledge Graph technology can connect different types of data in meaningful ways and support richer data services than most knowledge management systems. In addition, any graph can be linked to other graphs as well as to relational databases. Organizations can then use this technology to extract and discover deeper and more subtle patterns with the help of AI and Machine Learning. Make Better Decisions by Finding Things Faster: Knowledge Graph technology can provide enriched and in-depth search results, surfacing relevant facts and contextualized answers to specific questions. Knowledge Graphs can do this because of their networks of “things” and the facts that belong to these “things”. “Things” can be any business objects, or attributes and facets of those business objects, such as projects, products, employees, or their skills.
Data Standards and Interoperability: Knowledge Graphs are compliant with W3C standards, allowing for the re-use of publicly available industry graphs and ontologies (e.g., FIBO, ChEBI, ESCO, etc.), as well as the ISO standard for multilingual thesauri. AI Enablement: Data from unstructured sources up to highly structured data can be harmonized and linked, so that the resulting higher-quality data can be used for additional tasks, such as machine learning (ML). Knowledge Graphs are the linking engine for the management of enterprise data and a driver for new approaches in Artificial Intelligence. Knowledge Graph Use Cases – Value Across Verticals Pharmaceutical Industry: Boehringer Ingelheim uses the extensive capabilities of Knowledge Graphs to provide a unified view of all their research activities. Telecommunications: A global telecom company benefits from the power of Enterprise Knowledge Graphs, which help generate chatbots based on semi-structured documents. Government: A large Australian governmental organization provides trusted health information for its citizens by using several standard industry Knowledge Graphs (such as MeSH and DBpedia). The governmental health platform (Healthdirect Australia) links more than 200 trusted medical information sources that help to enrich search results and provide accurate answers. IT & IT Services: A large IT services enterprise uses Enterprise Knowledge Graphs to link all unstructured (legal) documents to their structured data, helping the enterprise to automatically and intelligently evaluate risks that are often hidden in common legal documents. Digital Twins and Internet of Things: The Internet of Things (IoT), considered as a graph, can become the basis of a comprehensive model of physical environments that captures relevant aspects of their intertwined structural, spatial, and behavioral dependencies.
It can support the context-rich delivery of data for network-based monitoring, provide insight into customer pain points, and enable control of these environments, with an extension to cyber-physical systems (CPS). Examples of this application are electric utilities, with their extensive interconnectivity (wired and wireless), cyber security mandates, and rich digital information (asset and customer). Better Understanding of the Individual: Whether as a human resources tool or a customer service enabler, a Knowledge Graph centered on the individual can connect data from across multiple sources (training, reviews, purchases, returns) and enable insights and recommendations for individuals as well as organizations. Incorporating Knowledge Graphs In Your Organization If one or more of the following scenarios sound familiar, then a Knowledge Graph can provide value: There are communication breakdowns across domains, because your departments have different views on things and their own language, and because nomenclature has changed, so things today are named differently than they were a couple of years ago. Getting answers from existing systems is time consuming or fails because: there are so many systems, but they do not talk to each other; they all have different data models, and you need help translating between them; data and information reside in multiple structured and unstructured sources (Excel, SharePoint, Word, PowerPoint, PDFs, CRMs, intranet sites) with no defined connection; you need experts to help wrangle answers out of your systems; and you always use Google instead of internal tools to find things.
You often wonder if you are missing insights: because you have documentation that relies on subject matter experts to infer meaning and insights, or your artifacts sometimes contain obscure or inconsistent statements that are open to interpretation, or making connections across domains, documents, and individuals is challenging or not feasible. About Cedrus: Knowledge Graphs are among a number of tools that Cedrus Digital utilizes in the AI transformation journey for companies of all sizes. If you’d like more information on Knowledge Graphs – including the technology, its life cycle, and how it can add value to your organization – please feel free to contact us. Martin Cardenas
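    To make the PAM example above concrete, here is a minimal sketch of a typed node and its relationships as subject–predicate–object triples in plain Python. A real knowledge graph would use a graph database and a standard ontology; all node and relation names below are illustrative, not drawn from any real vocabulary.

```python
# A tiny "knowledge graph" as a list of (subject, predicate, object) triples.
# The type label is what turns the opaque name "PAM" into a machine-usable
# entity with context: it is an infection, with propagation mechanisms and
# preventive measures attached.
triples = [
    ("PAM", "type", "Infection"),
    ("PAM", "propagation_mechanism", "ContaminatedWater"),
    ("PAM", "preventive_measure", "NoseClip"),
    ("Infection", "subclass_of", "Disease"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# With the type label in place, an algorithm can now answer questions like
# "what kind of thing is PAM?" and "how does it propagate?"
print(query("PAM", "type"))
print(query("PAM", "propagation_mechanism"))
```

    Graph databases generalize exactly this pattern-matching idea, adding indexing, ontology-aware inference, and query languages such as SPARQL or Cypher.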

  • NLP Introduction - How AI Understands Our Communication Patterns

    Introduction I know that, like me, as a child you probably daydreamed about how amazing life would be with a walking, talking robot. I used to imagine ordering my robot to make my bed, clean up my room, and most definitely do all of my homework! I would have time to spend on the real things that were of much greater importance. Little did I know back then how complicated it is to communicate with a machine. Communication is a two-way process. It is necessary not only to express our thoughts impeccably but to comprehend others’ words without our biases. It is a bonus if we can predict others’ words! It is clear that communication is critical, and communicating well is a very desirable skill. According to the widely cited Holmes Report on “The Cost of Poor Communications”, the cost of inadequate communication is staggering for businesses. Companies whose leaders are highly effective communicators have relevant business models, higher profits, and at least 50% higher returns to shareholders. They constantly seek to learn how they can serve their customers and in turn grow their profits. Diving into The Complexity of Language So, what makes human language so difficult to understand? It is something to ponder: we humans are certainly not experts at communicating with each other, so how can we communicate with a machine? The challenge is attributed to the dynamism of human language. For example, understanding query intent is a complex process. If we remove all contextual information, only the key words remain. This can be quite confusing! How would my robot comprehend my direction “Make my bed!”? Does this mean hammering wooden boards together to construct a bed frame, or does it mean neatly arranging the bedsheets? This illustrates the ambiguity inherent in human language. We must keep in mind that all human languages have evolved over thousands of years of speech patterns.
Language is essentially a fluid, living entity that develops with the needs and situations of communities. If we think about Shakespearean English and the English we speak today, we can easily notice the drastic contrast. According to Grammarly, in Shakespearean times (late 1500s – early 1600s), the words bandit, lonely, critic, dauntless, dwindle, elbow, green-eyed (to describe jealousy), and lackluster were created. It is interesting to observe the turbulence of that time – resulting in the creation of new vocabulary. Additionally, the tone, sentiment, and mentality of society were very different – all providing a distinctive filter on communication. The Challenge of Human-Machine Understanding As humans, we can use our intuition and communication experience to understand even what is not explicitly stated. In contrast, a machine lacks intuition. However, a form of intuition can be developed with the “experience” of a large corpus. For example, we have sufficient life experience to understand what a lion is. A computer cannot comprehend what a lion is as an entity or define its attributes. A computer is, however, exceptional at computing that the probability of a lion moving is higher than the probability of a piano moving. Regardless of how many layers of natural language methodologies we implement and the quantity of text we process, it is impossible to recreate human intuition and experience in a machine. Our best approach to closing the communication gap between human and machine is to represent words relative to other words within a corpus. Natural language processing (NLP) is an umbrella technology encompassing everything from text parsing to the complex statistical methods used in deep learning. The aim is to enable communication between machines and humans. NLP methodologies process human language, allowing computers to communicate and comprehend by reading, editing, summarizing, and even generating text.
NLP’s Impact on Machine Learning There are many open-source techniques to help a machine understand text. Text embedding techniques, such as Word2vec (from Google) and GloVe (from Stanford), provide a general natural language methodology for cultivating an understanding of words, context, sentiment, and intent. Each word is represented as a dense vector in a high-dimensional space. We build a dense vector so that it is similar to the vectors of words that appear in similar contexts. Using these vector representations, it is possible to find how different words in a sentence relate to each other and how they relate collectively. The algorithm goes through each position of a word in the text and iteratively adjusts the word vectors until the probability of each word appearing in its context is maximized. If a word in a sentence is replaced with a word with a comparable vector representation, we obtain a similar meaning for the sentence. Word2vec performs wonderfully when we have a large corpus. Word2vec training can be improved by eliminating stopwords from the dataset. Stopwords are high-frequency words that add little value to language understanding, and their removal helps improve model accuracy and training time. About Cedrus: If you’d like a guided approach to implementing NLP on your road towards Enterprise AI, Cedrus Digital specializes in the AI transformation journey for companies of all sizes. Come work with our experts to set you on your path in not only NLP, but Conversational AI, Vision, Predictive Analytics, and Knowledge Graphs as well. We can help you brainstorm and prioritize use cases, as well as help with planning, management, and delivery of AI projects. Let’s partner together. In future blog posts I will dive deeper into the intricacies of text embedding and explore, with some technical rigor, other aspects of NLP required for successful implementation.
Businesses have realized that unstructured and semi-structured data need to be mined using NLP rather than relying on outdated manual or template-based techniques. Natural Language Processing is a powerful tool that has an immense impact on the business model and on serving the customer. Stay tuned! Swati Sharma, Ph.D. is a Senior AI Solutions Engineer at Cedrus Digital. She teaches and mentors future data scientists and works with clients to create solutions for complex business problems.
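    The word-vector idea described above can be illustrated in a few lines: words that appear in similar contexts get nearby vectors, and cosine similarity measures that closeness. The 3-dimensional vectors below are made up for illustration only; real embeddings such as Word2vec or GloVe have hundreds of dimensions learned from a large corpus.

```python
import numpy as np

# Toy embeddings: "king" and "queen" occur in similar contexts, "apple" does not.
vectors = {
    "king":  np.array([0.9, 0.80, 0.1]),
    "queen": np.array([0.9, 0.75, 0.2]),
    "apple": np.array([0.1, 0.20, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_royal = cosine_similarity(vectors["king"], vectors["queen"])
sim_fruit = cosine_similarity(vectors["king"], vectors["apple"])
print(f"king~queen: {sim_royal:.3f}, king~apple: {sim_fruit:.3f}")
```

    Because "king" and "queen" have comparable vectors, swapping one for the other in a sentence preserves most of its meaning, which is exactly the property embedding training optimizes for.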

  • Steps for a successful migration to Red Hat OpenShift Service on AWS (ROSA)

    Enterprises have an abundance of options when considering their long-term modernization and container strategies. With constant innovation and associated change in this maturing space, it is important to choose a path wisely and to strive for consistency and predictability, to maximize what matters most at the end of the day: business value. As enterprises look to optimize their Kubernetes strategies, Red Hat OpenShift quickly emerges as a leading platform to build upon, for a variety of reasons. Aligning containerization benefits with enterprise cloud strategies brings another natural choice: leveraging the market leader, Amazon Web Services (AWS), to drive the ultimate combination of flexibility, scalability, and simplicity – historically uncommon attributes in enterprise platform architecture. AWS and Red Hat have been collaborating for years to drive innovative solutions in the rapidly evolving space of enterprise technology. The latest release of Red Hat OpenShift Service on AWS (ROSA) combines the best of both worlds to accelerate application modernization with the native, streamlined efficiencies that enterprises are seeking in today’s increasingly complex enterprise architecture space. For customers looking to embrace the business benefits that ROSA introduces, it is important to map out a winning strategy to ensure a successful migration. Here are 5 key areas to consider when planning your ROSA migration. 1) Infrastructure and Platform Analysis · Ensure you have complete visibility of your legacy platform for all applications. Leave no stone unturned. · Identify and plan for all platform gaps and application dependencies. · Map functionalities to OpenShift early in the process. 2) Application Analysis · Catalog all critical applications with key attributes such as internal and external dependencies, underlying language, and framework (with versions).
3) CI/CD Analysis · Validate the accuracy of your current pipeline process from a build, dev, release, prod, and validation perspective. · Adapt pipeline stages to leverage OpenShift standards like Helm and OpenShift Operators. · Plan for future native pipelines, including Tekton. 4) Code Automation Analysis · Analyze code for trends to remove, replace, or update code as needed. · Consistently feed your tooling to drive measurable improvement with each application. · Slow down to speed up: embrace automation at every possible step. 5) Plan for Optimization · Embrace OpenShift’s simplified options for automation and governance as code. · ROSA allows for management through the familiar OpenShift interfaces, with simple integration to a growing list of cutting-edge AWS services. · Identify and plan for the art of the possible once you have reached your destination. Focus on accelerating simple use cases as a first step. Lean on the experts from AWS, Red Hat, and their partner ecosystems to help ensure your migration is a success that clearly articulates the associated business value. Be sure to include the additional benefits that ROSA represents: a greater ability to focus on innovation and business value by leaving the management and support burden to the experts at Red Hat and AWS. ROSA introduces a low-friction, streamlined platform that will uncover areas of efficiency unique to each customer and open endless possibilities for ongoing modernization and innovation. About Cedrus - Cedrus designs, develops, and implements modern cloud applications that drive digital transformation at global brands. We are a trusted advisor for design thinking, innovation, and modernization, founded on expertise in cloud security, cloud native application development, cognitive business automation, and systems integration.

  • Conversational AI Best Practices - Discover AI Opportunities And Their Impact To Your Organization

    Congratulations! You’ve made the decision to continue building yourself as an AI-Enabled Enterprise by pursuing the world of Conversational AI. With customizable dictionaries, trainable audio models, and sentiment analysis to capture a caller’s tone, Chat Bots are becoming more viable supplements to your call centers with each passing year. Now that you’re invested, where do you begin? How do you successfully bring a Conversational AI into production? How do you even know which use cases are worth pursuing? In my 4-Part Best Practice Series, I'll cover the 9 steps below to get a chatbot into production, broken up into business and development categories. In this article, we'll be covering the first two: Business 1. Discover Conversational AI Opportunities 2. Determine Opportunity Impact and Feasibility 3. Plan Your Project 4. Change Management Development 1. Pre-Development Essentials 2. Data Preparation 3. Test Your New Chat Bot 4. Deploy and Maintain Your Chat Bot 5. MLOps and Continuous Improvement Business 1. Discover Conversational AI Opportunities The first step to implementing a Chat Bot in your organization is to see where it can have the biggest impact. To do this, it’s recommended to tie your Chat Bot opportunities to overarching company goals, your team’s goals, and any problem areas it can help with.
For example, say you manage the operational infrastructure of your company’s call center, and the company is focused on insurance. You could use the following criteria: Company Goal · Increase membership by 20% Team Goals · Handle 20% more call volume · Improve Net Promoter Score by 1 point · Reduce call duration Problem Areas · Current CSRs cannot keep up during peak hours, resulting in caller wait times of up to 1 hour · Ramp-up and retention of CSRs make it difficult and expensive to scale with growing demand · A low number of SMEs causes calls to drag on while CSRs wait to receive the answers to satisfy member callers Using the above, you can think about how Conversational AI can help you meet your goals and resolve trouble areas. Let’s break down one of the above pain points and formulate it into a use case: Problem Statement: Members experience 1-hour wait times during peak hours Causes: · As flu season approaches, members call in to check their coverage · At the end of the month, members call in to make their payments via phone · Claim escalations require the CSR to reach out to an internal SME; SMEs are limited, requiring members to once again go on hold How can a chat bot help? · An FAQ chat bot can answer questions on coverage, freeing up CSRs from redundant questions · A payment chat bot can process payments directly from users without the need for a CSR · CSRs can internally interact with a chat bot that extracts data from knowledge bases, reducing the load on SMEs and letting CSRs reply to members quicker Looking at the above, by framing company goals and pain points, you were able to come up with 3 different Conversational AI use cases. Using this method, you can brainstorm many targeted, relevant opportunities to bring Conversational AI into your business. Now that you have your ideas, how do you decide which to tackle first? That takes us to our next step: 2.
2. Determine Opportunity Impact and Feasibility

Now that we have a list of use cases, the next step is to vet and prioritize them. We do this by asking ourselves two questions:

· How much value and impact can this AI opportunity provide?
· How complicated will it be to implement?

From the use cases we came up with in Step 1, let’s continue this exercise with the Coverage use case, answering each question in turn.

Impact

We determined in the brainstorming exercise that one of the company’s big goals is to increase membership by 20% in the next year. That means call centers need to scale up to handle as much as 20% more calls. Looking at the problems we’ve listed above, hiring, training, and retaining CSRs has not been a trivial task, and handling 20% more call volume while scaling up the team is going to be a difficult and expensive process. All of the use cases we came up with can help relieve CSRs or SMEs; now it’s our job to find out by how much. Say that we are able to pull analytics on the conversation transcripts from these member calls, and we find that 15% of all incoming calls are members asking questions related to Coverage.
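This kind of back-of-the-envelope estimate is easy to script. Here's a minimal Python sketch using the illustrative figures from this example scenario (1,000 CSRs at an average $35,000 salary, a 20% growth target, 15% Coverage-related call volume, and a third of bot callers escalating straight to a CSR); swap in your own call-center figures to size other use cases:

```python
# Illustrative figures from this example scenario
csrs = 1_000                 # currently employed CSRs
avg_salary = 35_000          # average CSR salary per year, USD
growth = 0.20                # membership growth target
coverage_share = 0.15        # share of calls asking about Coverage
escalation_rate = 1 / 3      # ~33% of bot callers immediately ask for a CSR

current_spend = csrs * avg_salary            # $35,000,000
future_spend = current_spend * (1 + growth)  # $42,000,000

# Roughly 10% of call volume can be handled end-to-end by the Coverage bot
bot_handled = coverage_share * (1 - escalation_rate)

future_spend_with_bot = current_spend * (1 + growth - bot_handled)
savings = future_spend - future_spend_with_bot

print(f"Estimated annual savings: ${savings:,.0f}")
```

Running this reproduces the $3,500,000 annual savings estimate used in this example.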
With that information, we can start crunching numbers.

Assume the following:
· 1,000 currently employed CSRs
· Average CSR salary of $35,000 a year
· Growth goal: +20%

That means:
· Current spend: 1,000 x $35,000 = $35,000,000 annually
· Future spend: $35,000,000 x 1.20 = $42,000,000
· Increased cost: $7,000,000

Coverage Bot potential:
· 15% of call volume is related to Coverage
· Assume that 33% of callers who interact with the bot will ask to be routed to a CSR immediately
· That leaves 10% of call volume that can be covered by the Coverage chat bot

Putting these numbers together, we can come up with the below value potential:
· Future spend w/ Coverage chat bot = Current spend x (1 + Growth target – Bot-covered share)
· In other words, $35,000,000 x (1.20 – 0.10) = $38,500,000

This gives us our cost-savings estimate for the Coverage chat bot:
· Future spend – Future spend w/ Coverage bot = $42,000,000 – $38,500,000 = $3,500,000 in annual cost savings

By implementing the Coverage chat bot, we can save this health insurance company $3,500,000 per year in operational costs related to expansion, and even more as growth exceeds the 20% target.

Feasibility

We’ve just determined that implementing the Coverage chat bot can save an estimated $3,500,000 during its first year. The next step is figuring out how complex seeing the project to completion will be. To determine how feasible the Coverage chat bot is, we need to ask ourselves a few questions on two fronts: the business end and the technical end. Let’s start with the business end.

Business Questions:
· Does the project have an executive sponsor?
· Do we have the right resources available to deliver this project, and do they have capacity (text/audio conversation transcribers, dialogue designers, bot developers, backend developers, voice engineering, etc.)?
· How much change management will this require downstream? Does this impact how CSRs currently do their job? Will they need additional training?
· How about risk and compliance? Is there PII/PHI data? What about HIPAA?

Technical Questions:
· Does the bot need to support chat, voice, or both?
· How many conversation logs do we have stored or recorded? How difficult will they be for transcribers to access?
· Has a tool suite been selected, or do we need to POC different products? If not, how much will procurement of the tools and infrastructure cost?
· How complex will the infrastructure workflow be? How many APIs will we need to access? Is it expected to run on-prem, cloud-only, or hybrid?
· Is there a DevSecOps workflow that can be utilized for CI/CD, operationalizing, and monitoring? How about MLOps to track the performance of any Speech-to-Text models over time?

There is a lot to consider, but answering these questions when vetting a new potential Conversational AI use case saves a lot of pain upfront, since you determine the level of effort and understand the gaps and risks involved.

Putting it all together

With a value proposition and feasibility determined, you can properly create a roadmap of Conversational AI use cases to implement over time and prioritize them accordingly. And if you are looking for internal sponsorship, having clearly defined value propositions goes a long way in securing executive interest.

About Cedrus: If you’d like a guided approach to brainstorming and planning Conversational AI opportunities on your way to Enterprise-wide AI, Cedrus Digital specializes in the AI transformation journey for companies of all sizes. Come work with our experts to set you on your path in not only Conversational AI, but NLP, Vision, Predictive Analytics, and Knowledge Graphs as well. Beyond brainstorming, we specialize in the planning, management, and delivery of AI projects. Let’s partner together.

Like what you’ve read?
Stay tuned to our blog for regular posts from our AI experts, including best practices, tool selection, the value each area of AI can provide, case studies, use cases, and more!

Ez Nadar is Head of AI Solutions at Cedrus Digital. He helps customers brainstorm, prioritize, plan, and deliver Enterprise-Wide AI solutions.
