Pinterest Researchers Present an Effective Scalable Algorithm to Improve Diffusion Models Using Reinforcement Learning (RL)

Diffusion models are a class of generative models that work by gradually adding noise to the training data and then learning to recover the original data by reversing the noising process. This approach allows these models to achieve state-of-the-art image quality, making them one of the most significant developments in Machine Learning (ML) in the past few…
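To make the forward noising step concrete, here is a minimal NumPy sketch of the standard closed-form DDPM-style corruption q(x_t | x_0); the linear beta schedule is an illustrative assumption, and this is the generic diffusion forward process, not the RL fine-tuning algorithm the article covers.

```python
import numpy as np

def forward_noise(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0), the closed-form DDPM forward step."""
    alpha_bar = np.cumprod(1.0 - betas)[t]        # cumulative signal retention
    eps = np.random.randn(*x0.shape)              # Gaussian noise to be predicted
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps                                # a denoiser is trained to predict eps

betas = np.linspace(1e-4, 0.02, 1000)             # illustrative linear schedule
x0 = np.random.rand(3, 32, 32)                    # toy "image" in [0, 1]
xt, eps = forward_noise(x0, t=500, betas=betas)
```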

Meet Graph-Mamba: A Novel Graph Model that Leverages State Space Models (SSMs) for Efficient Data-Dependent Context Selection

Graph Transformers struggle to scale in graph sequence modeling due to high computational costs, and existing attention-sparsification methods fail to adequately address data-dependent contexts. State space models (SSMs) like Mamba are effective and efficient at modeling long-range dependencies in sequential data, but adapting them to non-sequential graph data is challenging. Many sequence models…
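For context on the mechanism Mamba contributes, below is a toy, single-channel sketch of a selective state space scan, where the projections and step size depend on the current input; the shapes and parameterization are simplifying assumptions, not Graph-Mamba's graph-aware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state = 8
A = -np.abs(rng.standard_normal(d_state))     # stable (decaying) diagonal dynamics
w_B = rng.standard_normal(d_state)
w_C = rng.standard_normal(d_state)

def selective_scan(x):
    """Toy one-channel selective SSM: B_t, C_t, and the step size delta_t
    all depend on the current input x_t, so the recurrence itself decides
    how much past context to keep -- the data-dependent selection property
    the headline refers to."""
    h = np.zeros(d_state)
    ys = []
    for x_t in x:
        delta = np.log1p(np.exp(x_t))         # softplus keeps the step size positive
        B_t, C_t = x_t * w_B, x_t * w_C       # input-dependent projections
        h = np.exp(delta * A) * h + delta * B_t * x_t   # discretized state update
        ys.append(C_t @ h)
    return np.array(ys)

y = selective_scan(rng.standard_normal(16))
```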

‘Weak-to-Strong Jailbreaking Attack’: An Efficient AI Method to Attack Aligned LLMs to Produce Harmful Text

Well-known Large Language Models (LLMs) like ChatGPT and Llama have advanced rapidly in recent years and shown impressive performance in a number of Artificial Intelligence (AI) applications. Though these models have demonstrated capabilities in tasks like content generation, question answering, and text summarization, there are concerns regarding possible abuse, such as disseminating false information and assistance for illegal…

Advancing Vision-Language Models: A Survey by Huawei Technologies Researchers on Overcoming Hallucination Challenges

The emergence of Large Vision-Language Models (LVLMs) marks the intersection of visual perception and language processing. These models, which interpret visual data and generate corresponding textual descriptions, represent a significant step toward enabling machines to see and describe the world around us with a nuanced understanding akin to human perception. A notable challenge that impedes their…

This AI Paper from Apple Unpacks the Trade-Offs in Language Model Training: Finding the Sweet Spot Between Pretraining, Specialization, and Inference Budgets

There has been a significant shift towards creating language models that are both powerful and practical to deploy in varied contexts. This narrative centers on the balance between building expansive models capable of deep understanding and generation of human language and the practical constraints of deploying them efficiently, especially in environments with limited computational resources…

This AI Paper Proposes Infini-Gram: A Groundbreaking Approach to Scale and Enhance N-Gram Models Beyond Traditional Limits

Pretrained on trillion-token corpora, large language models (LLMs) have achieved remarkable performance gains (Touvron et al., 2023a; Geng & Liu, 2023). However, whether such data scaling also benefits traditional n-gram language models (LMs) remains unexplored. This paper from the University of Washington and the Allen Institute for Artificial Intelligence delves into…
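The core backoff idea behind unbounded-n ("∞-gram") models can be sketched in a few lines: estimate the next token from the longest suffix of the context that actually occurs in the corpus. The brute-force counting below is only to make the idea concrete; the actual engine relies on suffix-array lookups to stay tractable at trillion-token scale.

```python
from collections import Counter

def infgram_next_token(corpus, context):
    """Back off to the longest suffix of `context` found in `corpus`, then
    return the empirical distribution of the tokens that follow it.
    Brute force for illustration only."""
    for start in range(len(context)):
        suffix = context[start:]                  # longest suffixes first
        followers = Counter(
            corpus[i + len(suffix)]
            for i in range(len(corpus) - len(suffix))
            if corpus[i:i + len(suffix)] == suffix
        )
        if followers:
            total = sum(followers.values())
            return {tok: c / total for tok, c in followers.items()}
    return {}

corpus = "the cat sat on the mat and the cat ran".split()
print(infgram_next_token(corpus, "and the cat".split()))   # -> {'ran': 1.0}
```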

Outsmarting Uncertainty: How ‘K-Level Reasoning’ from Microsoft Research is Setting New Standards for LLMs

Delving into the dynamic reasoning domain of artificial intelligence uncovers the pivotal role of Large Language Models (LLMs) in navigating environments that are not just complex but ever-changing. Traditional static reasoning models, while effective in predictable settings, falter when faced with the unpredictability inherent in real-world scenarios such as market fluctuations…
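K-level thinking itself is a classic game-theory recursion: a level-0 agent acts naively, and a level-k agent best-responds to opponents assumed to reason at level k-1. The toy p-beauty-contest example below illustrates only that recursion; in the paper's setting, LLMs play the roles of the recursive reasoners.

```python
def k_level_guess(k, p=2/3, level0=50.0):
    """Guess in a p-beauty contest: level-0 guesses the naive average;
    level-k best-responds to a crowd assumed to reason at level k-1."""
    guess = level0
    for _ in range(k):
        guess = p * guess          # best response to the (k-1)-level crowd
    return guess

for k in range(4):
    print(k, round(k_level_guess(k), 2))   # 50.0, 33.33, 22.22, 14.81
```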

LLMWare Launches SLIMs: Small Specialized Function-Calling Models for Multi-Step Automation

As enterprises look to deploy LLMs in more complex production use cases beyond simple knowledge assistants, there is a growing recognition of three interconnected needs:

- Agents – complex workflows involve multiple steps and require the orchestration of multiple LLM calls;
- Function Calls – models need to be able to generate structured output that can be…
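The function-calling pattern described above can be sketched as follows: the model emits structured output that a program parses and routes to a tool, enabling multi-step orchestration. Everything here (call_slim_model, the JSON schema, the tool registry) is a hypothetical placeholder, not llmware's actual API.

```python
import json

def call_slim_model(prompt: str) -> str:
    """Stub standing in for a SLIM model call; a real function-calling
    model would emit structured output like this instead of free text."""
    return json.dumps({"tool": "sentiment", "args": {"text": prompt}})

TOOLS = {"sentiment": lambda text: {"sentiment": "negative"}}  # toy registry

def dispatch(model_output: str):
    """Parse the model's structured output and route it to the matching
    tool, so a multi-step agent can chain calls programmatically."""
    call = json.loads(model_output)               # fails loudly on malformed output
    return TOOLS[call["tool"]](**call["args"])

print(dispatch(call_slim_model("the delivery was late and the box was damaged")))
```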

Nomic AI Introduces Nomic Embed: Text Embedding Model with an 8192 Context-Length that Outperforms OpenAI Ada-002 and Text-Embedding-3-Small on both Short and Long Context Tasks

Nomic AI released Nomic Embed, an open-source, auditable, and high-performing text embedding model trained with a multi-stage pipeline. It also has an extended context length, supporting tasks such as retrieval-augmented generation (RAG) and semantic search. Existing popular models, including OpenAI’s text-embedding-ada-002, lack openness and auditability. The model addresses the challenge of developing…
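As a usage sketch, the open weights can be loaded through sentence-transformers; the Hugging Face repo id nomic-ai/nomic-embed-text-v1 and the task prefixes are taken from the model card and should be verified there before use.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Nomic Embed expects a task prefix on each input (e.g. "search_query:" /
# "search_document:" for retrieval) -- see the model card for the full set.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1",
                            trust_remote_code=True)

docs = ["search_document: Nomic Embed supports an 8192-token context.",
        "search_document: Diffusion models add and then remove noise."]
query = "search_query: How long a context does Nomic Embed support?"

doc_emb = model.encode(docs)
q_emb = model.encode(query)
scores = doc_emb @ q_emb                 # dot-product similarity (unnormalized here)
print(scores.argmax())                   # index of the best-matching document
```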

Can Large Language Models be Trusted for Evaluation? Meet SCALEEVAL: An Agent-Debate-Assisted Meta-Evaluation Framework that Leverages the Capabilities of Multiple Communicative LLM Agents

Despite the utility of large language models (LLMs) across various tasks and scenarios, researchers struggle to evaluate them reliably in different situations. Using LLMs themselves to check model responses is a common workaround, but it has limits: trustworthy benchmarks are scarce, and meta-evaluation typically demands substantial human input. They…
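A rough sketch of the agent-debate idea: several LLM agents exchange judgments over a few rounds, and their aggregated verdict substitutes for costly human meta-evaluation. Here agent_judge is a stub standing in for real LLM calls, and the majority-vote scheme is an assumption, not SCALEEVAL's exact protocol.

```python
def agent_judge(agent, response_a, response_b, transcript):
    """Stub for one agent's judgment given the debate so far; a real
    system would prompt an LLM with both responses and the transcript."""
    return {"agent": agent, "winner": "A", "reason": f"{agent}: A is more grounded."}

def debate_meta_eval(response_a, response_b,
                     agents=("judge_1", "judge_2", "judge_3"), rounds=2):
    """Hypothetical agent-debate loop: agents exchange judgments over
    several rounds, then the final-round majority vote stands in for
    human meta-evaluation."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent_judge(agent, response_a, response_b, transcript))
    votes = [j["winner"] for j in transcript[-len(agents):]]  # final-round votes
    return max(set(votes), key=votes.count)

print(debate_meta_eval("answer A ...", "answer B ..."))
```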