Year after year, Artificial Intelligence (AI) and Machine Learning (ML) research and development reaches new heights. In 2020, we saw impressive human-like text generation from OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) and Boston Dynamics’ robots dancing with contagious moves and perfect synchronization, putting our dancing abilities to shame with what the company has termed “Athletic Intelligence.” While these developments are enthralling, your organization’s needs for AI/ML are likely very different. Last year, I shared some of the emerging trends in AI/ML. This blog will help you identify the top AI/ML trends to watch that may influence your AI/ML strategy.
MLOps — Automating the management of the automation
Many organizations are experimenting with AI/ML and identifying various use cases that can be solved with it, but few have been able to put ML models into production. Productizing AI/ML models is a continuous process. It requires ongoing data collection and wrangling to capture new, relevant data and avoid the data drift that degrades models over time. Moreover, the model needs to be (re)trained, tuned, measured, and redeployed so that it continues to perform accurately and stay relevant. The de facto use case for AI/ML is automation, but AI/ML management itself needs to be automated.
The cloud makes it easier to build ML models and workflows, but there is a growing need for orchestration of these workflows. Just as the use of containers led to orchestration platforms like Kubernetes and AWS ECS, the need to productionize and orchestrate ML workflows led to MLOps. MLOps brings data science, engineering, and DevOps practices together with the benefits of CI/CD, versioning, and automation of ML workflows to build and scale ML models in production. New managed MLOps services such as AWS’ Amazon SageMaker Pipelines simplify CI/CD for ML models, and the open source MLOps platform MLflow from Databricks helps manage the ML lifecycle. MLOps will gain momentum as organizations start productionizing their ML models.
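To make this concrete, here is a minimal sketch of experiment tracking with the open source MLflow library; the dataset, model, and metric below are placeholders chosen only to show how runs, parameters, and model artifacts get versioned.

```python
# A minimal sketch of MLOps-style experiment tracking with MLflow.
# The dataset, model, and hyperparameters below are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    # Version the hyperparameters, evaluation metric, and serialized model
    # so each (re)training run is reproducible and auditable.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Runs logged this way can be compared, promoted, or rolled back later, which is exactly the kind of versioning and automation that MLOps pipelines build on.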
Explaining is your responsibility
ML models are often referred to as a “black box” because it is hard to know how they arrive at a decision, but you will still need to explain the model’s behavior. Explainability is required for the following reasons:
- You will be accountable when your users ask questions such as:
- “Why was I given this recommendation?”
- “Why was I denied a loan application?”
- “Why was my job application rejected?”
- You will need to explain the model’s credibility to your internal stakeholders, legal and privacy teams, and executives.
- It is a compliance requirement in certain regulated industries (e.g. finance and banking, real estate, healthcare, etc.) to prove that the decision made by the model is not biased.
Explainability helps identify and mitigate bias and build better ML models. AWS introduced Amazon SageMaker Clarify, a tool that provides visibility into your model throughout ML workflows. Google’s What-If Tool is a visual interface designed to probe models, understand which features dominate the model, dive deep into data points, and then analyze those data points. There are interesting approaches to explainability, like LIME (Local Interpretable Model-Agnostic Explanations), SHAP (SHapley Additive exPlanations), and SAGE (Shapley Additive Global importancE). There is even new research on explaining model behavior by removing features. Additionally, MLOps improves auditability for your models by versioning the models and datasets.
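As a hedged illustration of one of these approaches, the sketch below uses SHAP’s TreeExplainer to attribute a single prediction to its input features; the dataset and model are stand-ins, not a recommendation.

```python
# A hedged sketch of per-prediction explanations using SHAP's TreeExplainer.
# The housing dataset and random forest model are stand-ins for your own.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shapley values attribute each prediction to its input features, i.e. how
# much each feature pushed this prediction away from the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Feature-level attributions for the first prediction, the kind of evidence
# you need when a user asks "why did the model decide this for me?"
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```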
Explainability in ML models is as important as testing for production applications. Compliance and regulatory requirements will be a forcing factor to adopt explainability.
Making data coherent with context
In the past few years, we saw a tremendous appetite for charts and visualizations providing business intelligence (BI). Organizations realized that many of these were mere reports and not actionable. Consider these important questions:
- “What does your user’s current interaction say about what they are looking for next?”
- “What is the intent of the user reaching out for your services and support?”
- “Are your customers happy with your product?”
Important signals customers provide often go unrecognized by traditional data analytics and reporting. Unlocking the value of data provides insights, but the value doesn’t just lie in the data; it lies in the context. Whether your legal team is responding to legal or privacy requests, your IT team is performing a forensic investigation, your support team is responding to customer issues, or your sales team is trying to convert a prospect to a customer, the context lies in the natural language of customer interactions.
The data context can be identified with natural language understanding and querying, entity recognition, sequence analysis (including bidirectional representations from transformers such as BERT), sentiment/intent analysis, data classification, data enrichment, clustering, graph-based relationship representations (including Graph Neural Networks), and knowledge graphs, thus opening different avenues for data analysis and recommendations. Check out Druva’s blog on extracting value from data using Natural Language Processing.
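As an illustration, the hedged sketch below pulls two such signals, sentiment and named entities, out of a made-up support ticket using Hugging Face transformers pipelines; the default pipeline models are assumptions and would likely be replaced with models tuned for your domain.

```python
# A hedged sketch of extracting context from a customer interaction with
# Hugging Face transformers pipelines. The ticket text is made up, and the
# default pipeline models are stand-ins for whatever fits your domain.
from transformers import pipeline

ticket = (
    "My restore job for the Chicago file server has been stuck since Monday, "
    "and I still haven't heard back from support. I'm considering cancelling."
)

# Sentiment/intent signal: is this customer at risk of churning?
sentiment = pipeline("sentiment-analysis")
print(sentiment(ticket))   # e.g. [{'label': 'NEGATIVE', 'score': ...}]

# Entity recognition: which product, location, or timeframe is involved?
ner = pipeline("ner", aggregation_strategy="simple")
print(ner(ticket))         # e.g. a location entity such as 'Chicago'
```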
Organizations will find themselves needing to build product features that require data context analysis. They will build or buy context-driven solutions to support services, sales, marketing, legal, and privacy needs. You can learn more here about how Druva leverages machine learning to build search, analytics, and data discovery solutions for customers.
Takeaways
Organizations are learning how to integrate AI/ML into their business. MLOps will help productize AI/ML. Explainability will drive better, more credible ML models, and satisfy regulatory compliance. Finally, advanced organizations will start expanding their use cases by extracting more value from data by analyzing it with context.
Looking to further explore the exciting technology behind machine learning? I authored a two-part blog on the subject in March 2020; read the series here. You can also dive deeper into ML architecture by reading this blog on Druva’s unusual data activity (UDA) detection platform.