Cloudera Unveils AI Inference Service with Embedded NVIDIA NIM Microservices to Accelerate GenAI Development and Deployment

Cloudera's AI Inference service boosts LLM inference speeds by up to 36x using NVIDIA accelerated computing and NVIDIA NIM microservices, providing enhanced performance, robust security, and scalable flexibility for enterprises

The combined capability brings together the companies’ differentiators in a single offering: Cloudera’s trusted data as the foundation for trusted AI, paired with NVIDIA accelerated computing and the NVIDIA AI Enterprise software platform, to deploy secure and performant AI applications privately on Cloudera

SANTA CLARA, Calif. and NEW YORK, Oct. 08, 2024 (GLOBE NEWSWIRE) -- Cloudera, the only true hybrid platform for data, analytics, and AI, today launched Cloudera AI Inference powered by NVIDIA NIM microservices, part of the NVIDIA AI Enterprise platform. As one of the industry’s first AI inference services to provide embedded NIM microservice capability, Cloudera AI Inference uniquely streamlines the deployment and management of large-scale AI models, allowing enterprises to harness their data’s true potential to advance GenAI from pilot phases to full production.

Recent data from Deloitte reveals that the biggest barriers to GenAI adoption for enterprises are compliance risks and governance concerns, yet adoption of GenAI is progressing at a rapid pace, with over two-thirds of organizations increasing their GenAI budgets in Q3 this year. To mitigate these concerns, businesses must turn to running AI models and applications privately, whether on premises or in public clouds. This shift requires secure and scalable solutions that avoid complex, do-it-yourself approaches.

Cloudera AI Inference protects sensitive data from leaking to non-private, vendor-hosted AI model services by providing secure development and deployment within enterprise control. Powered by NVIDIA technology, the service helps build trusted data for trusted AI with high performance, enabling the efficient development of AI-driven chatbots, virtual assistants, and agentic applications that drive both productivity and new business growth.

The launch of Cloudera AI Inference comes on the heels of the company’s collaboration with NVIDIA, reinforcing Cloudera’s commitment to driving enterprise AI innovation at a critical moment, as industries navigate the complexities of digital transformation and AI integration.

Developers can build, customize, and deploy enterprise-grade LLMs with up to 36x faster performance using NVIDIA Tensor Core GPUs and nearly 4x the throughput of CPUs. The seamless user experience integrates the UI and APIs directly with NVIDIA NIM microservice containers, eliminating the need for command-line interfaces (CLI) and separate monitoring systems. The service’s integration with Cloudera’s AI Model Registry also enhances security and governance by managing access controls for both model endpoints and operations. Users benefit from a unified platform where all models—whether LLM deployments or traditional models—are seamlessly managed under a single service.
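
For illustration, here is a minimal sketch of how an application might call a model endpoint served by Cloudera AI Inference. It assumes the NIM-backed endpoint exposes an OpenAI-compatible chat-completions route; the host URL, token variable, and model name below are hypothetical placeholders, not documented values.

    # Minimal sketch (hypothetical values): querying a model endpoint served by
    # Cloudera AI Inference. Assumes the NIM-backed endpoint exposes an
    # OpenAI-compatible chat-completions route.
    import os
    import requests

    ENDPOINT = "https://your-inference-host/v1/chat/completions"  # hypothetical host
    TOKEN = os.environ["CDP_TOKEN"]  # hypothetical auth token variable

    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "model": "meta/llama-3.1-8b-instruct",  # example open-source LLM
            "messages": [{"role": "user", "content": "Summarize yesterday's support tickets."}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])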

Additional key features of Cloudera AI Inference include:

  • Advanced AI Capabilities: Utilize NVIDIA NIM microservices to optimize open-source LLMs, including Llama and Mistral, for cutting-edge advancements in natural language processing (NLP), computer vision, and other AI domains.
  • Hybrid Cloud & Privacy: Run workloads on prem or in the cloud, with VPC deployments for enhanced security and regulatory compliance.
  • Scalability & Monitoring: Rely on auto-scaling, high availability (HA), and real-time performance tracking to detect and correct issues, and deliver efficient resource management.
  • Open APIs & CI/CD Integration: Access standards-compliant APIs for model deployment, management, and monitoring for seamless integration with CI/CD pipelines and MLOps workflows (see the sketch after this list).
  • Enterprise Security: Enforce model access with Service Accounts, Access Control, Lineage, and Auditing features.
  • Risk-Managed Deployment: Conduct A/B testing and canary rollouts for controlled model updates.
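
As one example of the Open APIs feature, a CI/CD pipeline could run a post-deployment smoke test against the inference service. The sketch below is a hypothetical illustration: it assumes an OpenAI-compatible /v1/models listing route, and the host, token variable, and expected model name are placeholders.

    # Hypothetical post-deployment smoke test for a CI/CD pipeline.
    # Assumes an OpenAI-compatible /v1/models route on the inference endpoint.
    import os
    import sys
    import requests

    HOST = os.environ.get("INFERENCE_HOST", "https://your-inference-host")
    TOKEN = os.environ["CDP_TOKEN"]  # hypothetical auth token variable

    resp = requests.get(
        f"{HOST}/v1/models",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    served = {m["id"] for m in resp.json().get("data", [])}

    EXPECTED = "meta/llama-3.1-8b-instruct"  # example model the pipeline just deployed
    if EXPECTED not in served:
        sys.exit(f"Smoke test failed: {EXPECTED} not in served models {sorted(served)}")
    print(f"Smoke test passed: {EXPECTED} is serving.")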

“Enterprises are eager to invest in GenAI, but it requires not only scalable data but also secure, compliant, and well-governed data,” said industry analyst Sanjeev Mohan. “Productionizing AI at scale privately introduces complexity that DIY approaches struggle to address. Cloudera AI Inference bridges this gap by integrating advanced data management with NVIDIA's AI expertise, unlocking data's full potential while safeguarding it. With enterprise-grade security features like service accounts, access control, and audit, organizations can confidently protect their data and run workloads on prem or in the cloud, deploying AI models efficiently with the necessary flexibility and governance.”

“We are excited to collaborate with NVIDIA to bring Cloudera AI Inference to market, providing a single AI/ML platform that supports nearly all models and use cases so enterprises can both create powerful AI apps with our software and then run those performant AI apps in Cloudera as well,” said Dipto Chakravarty, Chief Product Officer at Cloudera. “With the integration of NVIDIA AI, which facilitates smarter decision-making through advanced performance, Cloudera is innovating on behalf of its customers by building trusted AI apps with trusted data at scale.”

“Enterprises today need to seamlessly integrate generative AI with their existing data infrastructure to drive business outcomes,” said Kari Briski, vice president of AI software, models and services at NVIDIA. “By incorporating NVIDIA NIM microservices into Cloudera's AI Inference platform, we're empowering developers to easily create trustworthy generative AI applications while fostering a self-sustaining AI data flywheel.”

These new capabilities will be unveiled at Cloudera's premier AI and data conference, Cloudera EVOLVE NY, taking place Oct. 10, where attendees can learn more about how these latest updates deepen Cloudera’s commitment to elevating enterprise data from pilot to production with GenAI.

About Cloudera
Cloudera is the only true hybrid platform for data, analytics, and AI. With 100x more data under management than other cloud-only vendors, Cloudera empowers global enterprises to transform data of all types, on any public or private cloud, into valuable, trusted insights. Our open data lakehouse delivers scalable and secure data management with portable cloud-native analytics, enabling customers to bring GenAI models to their data while maintaining privacy and ensuring responsible, reliable AI deployments. The world's largest brands in financial services, insurance, media, manufacturing, and government rely on Cloudera to use their data to solve what seemed impossible—today and in the future.

To learn more, visit Cloudera.com and follow us on LinkedIn and X. Cloudera and associated marks are trademarks or registered trademarks of Cloudera, Inc. All other company and product names may be trademarks of their respective owners.

Contact

Jess Hohn-Cabana
cloudera@v2comms.com

