GyanAI Launches the World’s First Explainable Language Model and Research Engine

  • Novel abstract language model set to transform research and knowledge discovery functions across industries
  • Capitalizes on the potential of AI to add up to $15.7 trillion to the global economy by 2030*

NEW YORK, March 15, 2023 (GLOBE NEWSWIRE) -- GyanAI today announced the launch of the world’s first explainable language model and natural language understanding engine.

Based on its proprietary technology, GyanAI delivers on the promise of explainable AI with a model that understands ‘meaning’ as closely as possible to the way humans do. GyanAI is fully explainable: users can trace every result back to its source, and the model can provide reasoning for its output. GyanAI’s output is a mix of extractive and generative text.

McKinsey estimates that the number of AI capabilities used by organizations (including natural-language generation) doubled between 2018 and 2022. As more and more organizations depend on AI systems for insights to make critical decisions with considerable implications, it is crucial for them to have access to trusted results and conclusions. Explainable AI is therefore growing in importance. According to McKinsey (2022), companies that attribute at least 20% of their bottom-line returns to the use of AI are more likely to follow advanced AI best practices that enable explainability, and those that build digital trust with consumers through measures such as making AI explainable can see a 10% or higher increase in revenue.

Breaking out of the ‘Black Box’ Transformer Model
Large Language Models (LLMs) are neural networks trained on huge amounts of natural language content. LLMs are black boxes and purely a function of their training data. They are designed to predict the next word in a sequence. They have well-documented limitations, including a lack of explainability and reasoning, factual errors, an inability to understand full compositional context, susceptibility to biased training data, and tractability problems, among others.

In contrast, GyanAI is a knowledge engine based on a content-independent language model capable of deeply understanding textual discourse without relying on word patterns. GyanAI acquires its knowledge in near real time from the documents it processes and, optionally, from one or more Gyan Knowledge Stores. Because the Gyan language model is independent of content, it cannot be used to generate misinformation or be manipulated with biased training data.

Venkat Srinivasan, Founder of GyanAI and Serial AI Entrepreneur, said: “Our core objective is to provide an interpretable, robust capability for machines to understand natural language as closely as possible to the way humans do. We have re-defined what a ‘language model’ can be. We are principally motivated by a desire to efficiently acquire and apply, automatically where possible, insights from the enormous amount of information we have access to in various fields. GyanAI can be used in conjunction with LLMs where beneficial. While GyanAI is ready for purpose-specific enterprise deployment and is in production use, we anticipate announcing an API for application developers in the near future.”

From Search to Knowledge
Gartner reports that there are over one billion knowledge workers globally, and their number is growing rapidly. Typically, they rely on search engines based on online content and/or internal documents. However, today’s search engines are designed to provide only ‘access’ to information. Even in access, they can generate substantial false positives and negatives. GyanAI supports the full knowledge acquisition process as a user traverses from ‘search’ to ‘knowledge’.

Nitin Nohria, former Dean of Harvard Business School commented: “Large language models have opened up our imaginations on how AI will soon become ubiquitous in its use. Although answers that emerge from black box LLMs may be satisfactory in several use cases, there will be many others where a full understanding of how we arrived at the answers will really ‘matter’. Most professional and knowledge work will require explainable AI. And that’s what makes GyanAI such a promising technology. GyanAI is showing us a whole new way to realize the immense possibilities of AI to transform knowledge work and unleash a new era of unprecedented innovation.”

M.S. Vijay Kumar, Senior Advisor to VP, Open Learning, and former Associate Dean of Open Learning at the Massachusetts Institute of Technology added:
“GyanAI’s capabilities, which I have seen, present important opportunities for purposeful, lifelong learning. For example, GyanAI can be used to rapidly assemble relevant content collections into stackable micro-learning units and customized learning pathways directed towards different competencies – thus dynamically connecting learners to labor market opportunities and societal needs. I consider GyanAI’s ability to keep knowledge fresh and updated through continuous knowledge discovery to be a critical element of the infrastructure to support continuous learning and research.”

About Gyan
Founded in 2017, GyanAI is the world’s first explainable language model and research engine. Built on novel technology by a seasoned, world-class team, its auto-curating, self-organizing research engine attempts to understand language the way humans do. GyanAI supports the full knowledge acquisition and research process, driving value by discovering meaningful and relevant knowledge.

Join the waiting list and discover more at www.gyanai.com

  • Instagram: https://www.instagram.com/_gyanai_/
  • Twitter: https://twitter.com/GyanAI_Platform
  • LinkedIn: https://www.linkedin.com/company/74948004/
  • Facebook: https://www.facebook.com/GyanAIplatform

For further information, please contact:

Gutenberg Communications:
Tom Geiser
thomas@thegutenberg.com
+1 314 412-6051

Keeret Singh Heer
Keeret@thegutenberg.com
(917) 940-3294
