Notable and Interesting Recent AI News, Articles, and Papers for Monday, July 15, 2024

A selection of the most important recent news, articles, and papers about AI.



News, Articles, and Analyses

Developers get by with a little help from AI: Stack Overflow Knows code assistant pulse survey results – Stack Overflow

Gen AI and beyond: Where else to focus now | McKinsey

(Friday, July 12, 2024) “Yes, gen AI can be dazzling. But to deliver value, leaders will have to look beyond center stage.”

Designing for Education with Artificial Intelligence: An Essential Guide for Developers – Office of Educational Technology

“Informing product leads and their teams of innovators, designers, and developers as they work toward safety, security, and trust while creating AI products and services for use in education.”

IBM’s AI, Open-Source Granite Models & Sports Technology – The Futurum Group

Author: Steven Dickens

“Chief Technology Advisor Steven Dickens shares insights on how IBM uses AI to enhance sports, democratizing innovation through open-source.”

Technical Papers and Preprints

[2407.08488] Lynx: An Open Source Hallucination Evaluation Model

Authors: Ravi, Selvan Sunitha; Mielczarek, Bartosz; Kannappan, Anand; Kiela, Douwe; Qian, Rebecca

(Thursday, July 11, 2024) “Retrieval Augmented Generation (RAG) techniques aim to mitigate hallucinations in Large Language Models (LLMs). However, LLMs can still produce information that is unsupported or contradictory to the retrieved contexts. We introduce LYNX, a SOTA hallucination detection LLM that is capable of advanced reasoning on challenging real-world hallucination scenarios. To evaluate LYNX, we present HaluBench, a comprehensive hallucination evaluation benchmark, consisting of 15k samples sourced from various real-world domains. Our experiment results show that LYNX outperforms GPT-4o, Claude-3-Sonnet, and closed and open-source LLM-as-a-judge models on HaluBench. We release LYNX, HaluBench and our evaluation code for public access.”
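
To make the LLM-as-a-judge setup concrete, here is a minimal sketch of the kind of hallucination check the abstract describes: a judge model is given a question, the retrieved document, and the answer, and decides whether the answer is faithful to the document. The query_judge_model callable, the prompt wording, and the JSON output format are illustrative assumptions, not the paper's actual Lynx prompt or API.

# Minimal sketch of an LLM-as-a-judge hallucination check in the spirit of
# Lynx/HaluBench. `query_judge_model` is a hypothetical placeholder for a call
# to whatever judge model you deploy; the prompt and output schema are assumed.

import json
from typing import Callable

JUDGE_PROMPT = """Given the QUESTION, DOCUMENT and ANSWER, decide whether the
ANSWER is faithful to the DOCUMENT. Reply with a JSON object:
{{"reasoning": "<step-by-step check>", "score": "PASS" or "FAIL"}}

QUESTION: {question}
DOCUMENT: {document}
ANSWER: {answer}
"""

def detect_hallucination(
    question: str,
    document: str,
    answer: str,
    query_judge_model: Callable[[str], str],
) -> dict:
    """Ask a judge LLM whether `answer` is supported by `document`."""
    prompt = JUDGE_PROMPT.format(question=question, document=document, answer=answer)
    raw = query_judge_model(prompt)   # judge model's raw text output
    verdict = json.loads(raw)         # expected keys: "reasoning", "score"
    return {
        "hallucinated": verdict["score"] == "FAIL",
        "reasoning": verdict["reasoning"],
    }

In a benchmark like HaluBench, a loop over (question, document, answer) triples would compare each verdict against the gold label to score the judge model.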

[2407.08105] Federated Learning and AI Regulation in the European Union: Who is Responsible? — An Interdisciplinary Analysis

Authors: Woisetschläger, Herbert; Mertel, Simon; Krönke, Christoph; Mayer, Ruben; Jacobsen, Hans-Arno

(Thursday, July 11, 2024) “The European Union Artificial Intelligence Act mandates clear stakeholder responsibilities in developing and deploying machine learning applications to avoid substantial fines, prioritizing private and secure data processing with data remaining at its origin. Federated Learning (FL) enables the training of generative AI Models across data siloes, sharing only model parameters while improving data security. Since FL is a cooperative learning paradigm, clients and servers naturally share legal responsibility in the FL pipeline. Our work contributes to clarifying the roles of both parties, explains strategies for shifting responsibilities to the server operator, and points out open technical challenges that we must solve to improve FL’s practical applicability under the EU AI Act.”
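
For readers unfamiliar with the training setup whose responsibilities the paper analyzes, below is a minimal federated averaging sketch: each client fits a model on its local data and shares only the resulting parameters, which the server aggregates. The linear model, NumPy update rule, and function names are simplified stand-ins, not the paper's method or any specific FL framework.

# Minimal federated averaging (FedAvg) sketch: raw data stays on each client,
# only model parameters travel to the server, which averages them.

import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """Client-side step: gradient descent on a local linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_weights: np.ndarray, clients: list) -> np.ndarray:
    """Server-side step: collect client parameters and average them."""
    client_weights = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(client_weights, axis=0)

# Toy run: three clients with private data, two communication rounds.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(2):
    w = federated_round(w, clients)
print("aggregated weights:", w)

The division of labor visible here (clients compute updates, the server operator orchestrates aggregation) is exactly the split the paper maps onto the EU AI Act's allocation of legal responsibility.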

 
