Explainability, Cost, Compliance Drive AI Choices in Enterprises

Even as artificial intelligence has become more democratized and accessible, experts caution against the uncritical adoption of large language models (LLMs). Sujatha S. Iyer, head of security at ManageEngine, a division of Zoho Corp., emphasizes the necessity of vigilance in deploying AI tools, particularly in enterprise contexts.
Iyer remarks, “Not everything is an LLM problem just because it is the hype.” Although AI has significant utility in tasks such as summarization and content generation, applying it indiscriminately can waste resources on problems that traditional machine-learning methods solve more effectively.
The Need for Transparency
In high-stakes scenarios like predicting outages or detecting fraud, explainability becomes paramount. Enterprises require models that not only predict outcomes but also articulate the rationale behind their predictions. “If my enterprise software indicates an 80% chance of an outage, there must be an explanation,” Iyer asserts. Traditional models excel in providing this clarity, enabling stakeholders to make informed decisions swiftly.
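As a rough illustration of the kind of explainable prediction Iyer describes, consider a logistic-regression-style outage score that reports per-feature contributions alongside the probability. This is a minimal sketch with hypothetical feature names and weights, not ManageEngine's actual model:

```python
import math

# Hypothetical weights for an interpretable outage-risk model.
# In practice these would be learned, e.g., via logistic regression.
WEIGHTS = {"cpu_load": 2.0, "error_rate": 3.0, "disk_usage": 1.5}
BIAS = -4.0

def outage_probability(features):
    """Return (probability, per-feature contributions) so the
    prediction can be explained, not just reported."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    return probability, contributions

prob, why = outage_probability(
    {"cpu_load": 0.9, "error_rate": 0.8, "disk_usage": 0.7})
print(f"Outage risk: {prob:.0%}")
# List the drivers of the prediction, largest first.
for feature, weight in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{weight:.2f}")
```

Because each feature's contribution to the score is visible, a stakeholder can see not only that an outage is likely but which signals are driving the risk.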
The push for explainability is further fueled by regulatory requirements, especially within financial institutions tasked with adhering to stringent compliance standards. In such environments, explainable AI is not just advantageous but essential for meeting these obligations.
Cost Concerns and Computational Efficiency
The financial implications of AI deployment are driving many enterprises to reconsider their technological approaches. Iyer comments on the “GPU tax” associated with high-performance models, warning against the exorbitant costs tied to unnecessary computational resources. Current estimates suggest that compute expenses account for 55% to 60% of OpenAI’s total operating costs, raising alarms over the sustainability of excessive GPU utilization.
Research indicates that classical machine learning models are increasingly resource-efficient, often operable on standard laptops or minimal cloud setups, thereby accelerating deployment timelines without the overhead associated with deep learning frameworks. This efficiency permits organizations to implement predictive analytics without the burden of managing vast datasets typically needed for LLM development.
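To give a sense of how lightweight such classical approaches can be, here is a toy nearest-centroid classifier built entirely from the standard library; the data and labels are hypothetical, and it runs instantly on any laptop:

```python
from statistics import mean

def train_centroids(samples):
    """Compute one centroid per class label.
    samples: list of (features, label) pairs."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(col) for col in zip(*rows)]
            for label, rows in by_label.items()}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy example: classify servers as "healthy" or "at_risk"
# from (cpu_load, error_rate) readings.
data = [((0.2, 0.1), "healthy"), ((0.3, 0.2), "healthy"),
        ((0.8, 0.7), "at_risk"), ((0.9, 0.9), "at_risk")]
model = train_centroids(data)
print(predict(model, (0.85, 0.8)))  # nearest the "at_risk" centroid
```

Real deployments would use a mature library rather than hand-rolled code, but the point stands: no GPU, no large dataset, and no deep-learning framework is required.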
Digital Maturity Matters
The success of AI initiatives also hinges on an organization’s digital maturity. Many companies are still in the foundational phase, working to digitize their operations. Iyer points out the challenges of utilizing non-digitized data, which complicates analytics and impedes progress in AI application.
This observation aligns with findings from the MIT CISR Enterprise AI Maturity Model, indicating that 28% of businesses remain in the initial stages of AI adoption, focusing on workforce education and policy formulation before scaling to advanced implementations.
Nagaraj Nagabhushanam, vice president of data and analytics at The Hindu Group, noted that traditional AI has long been instrumental in operational systems such as recommendation and action systems, combining heuristic approaches with established natural language processing models for optimal performance.
Compliance-Driven AI Development
The increasingly strict regulatory landscape necessitates controlled AI development practices. Iyer mentions that enterprises often train models exclusively on commercially licensed datasets, ensuring data privacy and compliance. Such practices reflect broader concerns regarding AI governance and regulatory compliance.
Research from KPMG illustrates the significance of tools designed for interpretability, which facilitate transparent AI decision-making while safeguarding proprietary data. These approaches not only bolster compliance but also enhance stakeholder trust.
Practical AI Solutions
As enterprise needs are distinctly contextual, Iyer argues that employing excessively large models is often unwarranted. Traditional machine-learning techniques typically yield high accuracy at a fraction of the cost of deep learning methods, making them particularly relevant for sectors like finance and healthcare.
Despite this, organizations are not shunning LLMs entirely. Zoho’s research teams are exploring models with varying parameter counts and “mixture of experts” architectures, aiming to combine efficiency with capability. Notably, 78% of organizations report using AI in some capacity, a marked increase from the previous year.
The most effective deployments tend to leverage hybrid methodologies, strategically intertwining traditional machine learning models with LLMs to optimize outcomes while addressing real-world business challenges.
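The hybrid pattern described above can be sketched as a simple task router that sends structured prediction work to a classical model and open-ended language work to an LLM. The task names and routing rules here are hypothetical placeholders, not any vendor's actual design:

```python
# Hypothetical task router: classical ML handles structured prediction;
# an LLM is invoked only for open-ended language tasks.
CLASSICAL_TASKS = {"fraud_score", "outage_forecast", "churn_risk"}
LLM_TASKS = {"summarize", "draft_reply"}

def route(task):
    """Pick the cheapest backend capable of handling the task."""
    if task in CLASSICAL_TASKS:
        return "classical_model"   # runs on commodity hardware
    if task in LLM_TASKS:
        return "llm"               # incurs the "GPU tax"
    raise ValueError(f"unknown task: {task}")

print(route("fraud_score"))  # classical_model
print(route("summarize"))    # llm
```

The design choice is simply to make the expensive path opt-in: LLM calls happen only when a task genuinely needs language generation, keeping routine predictions on cheap, explainable models.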