Zoho Corporation plans to employ NVIDIA’s AI-accelerated computing platform, including NVIDIA NeMo, to develop and deploy large language models (LLMs) in its Software as a Service (SaaS) applications. This development was unveiled on October 24, 2024, during the NVIDIA AI Summit in Mumbai. The LLMs will be made available to Zoho’s global client base of over 700,000 customers through ManageEngine and Zoho.com. Zoho has already spent more than $10 million on NVIDIA’s AI technology and GPUs, with plans to invest an additional $10 million over the coming year.
Ramprakash Ramamoorthy, Director of AI at Zoho Corporation, stressed the company’s commitment to building LLMs tailored specifically to different commercial applications. He argued that many existing LLMs are consumer-oriented and offer limited value to businesses.
Zoho’s strategy leverages its extensive technology stack to build context into its AI, increasing its effectiveness. The company prioritizes user privacy, ensuring that its models comply with privacy standards from the outset, and aims to use NVIDIA’s AI software and accelerated computing to deliver businesses a fast and profitable return on investment.
For more than a decade, Zoho has integrated artificial intelligence into its portfolio of over 100 products across the ManageEngine and Zoho divisions. The company uses a multi-modal AI approach to derive contextual intelligence that helps users make informed business decisions. Zoho is also building narrow, small, and medium-sized language models, distinct from LLMs, to address a variety of use cases. This approach gives businesses the option to select AI solutions that meet their objectives and performance requirements.
Vishal Dhupar, Managing Director, Asia South at NVIDIA, commented on the partnership, saying that Zoho’s use of NVIDIA’s AI software and accelerated computing platform enables it to build a wide range of models to satisfy a variety of business needs. Zoho is refining its LLMs on NVIDIA’s platform, leveraging NVIDIA Hopper GPUs and the NVIDIA NeMo end-to-end platform for custom generative AI development. The company is also evaluating NVIDIA TensorRT-LLM to optimize its LLMs for deployment, reporting a 60% boost in throughput and a 35% reduction in latency compared with previously used open-source frameworks. Furthermore, Zoho is accelerating workloads such as speech-to-text on NVIDIA’s platform.
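For readers curious what such a deployment stack looks like in practice, the sketch below uses TensorRT-LLM’s high-level Python LLM API to load a model and run batched inference on a local GPU. It is an illustrative example only, not Zoho’s actual pipeline: the model checkpoint, prompt, and sampling settings are placeholders, and exact class and parameter names can vary between TensorRT-LLM releases.

```python
# Illustrative sketch: serving a model with TensorRT-LLM's high-level LLM API.
# The checkpoint name and prompt are placeholders, not Zoho's models or code.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Load a Hugging Face checkpoint; TensorRT-LLM builds an optimized
    # engine for the local GPU on first load.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

    prompts = ["Summarize last quarter's support tickets in two sentences."]
    params = SamplingParams(max_tokens=64, temperature=0.2)

    # generate() batches the prompts and returns one output per prompt.
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```

Throughput and latency gains of the kind cited above typically come from the engine-level optimizations TensorRT-LLM applies under the hood, such as kernel fusion and in-flight batching, rather than from changes to application code like this.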