Notable and Interesting Recent AI News, Articles, and Papers for Thursday, July 18, 2024

A selection of the most important recent news, articles, and papers about AI.


News, Articles, and Analyses

IBM text-to-SQL generator tops leaderboard – IBM Research

(Tuesday, July 02, 2024) “IBM’s generative AI solution takes a top spot on the BIRD benchmark for handling complex database queries”
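
The post gives no implementation details, but the text-to-SQL task that the BIRD benchmark measures can be sketched generically: the model is given a database schema plus a natural-language question and asked to emit SQL. A minimal, illustrative sketch follows; this is not IBM's system, and `complete` is a hypothetical stand-in for whatever LLM completion call is available:

```python
# Generic text-to-SQL pattern (illustrative only; not IBM's solution).
# `complete` is a hypothetical function wrapping any LLM endpoint.

SCHEMA = """
CREATE TABLE customers (id INTEGER, name TEXT, country TEXT);
CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL, placed_at TEXT);
"""

def text_to_sql(question: str, complete) -> str:
    """Prompt an LLM with the schema and a question; return the generated SQL."""
    prompt = (
        "Given this SQLite schema:\n"
        f"{SCHEMA}\n"
        "Write a single SQL query that answers the question below. "
        "Return only the SQL.\n\n"
        f"Question: {question}\nSQL:"
    )
    return complete(prompt).strip()

# Example call:
# text_to_sql("What is the total revenue from customers in Germany?", complete)
```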

Reaffirming IBM’s commitment to the Rome Call for AI ethics – IBM Research

(Monday, July 15, 2024) “IBM joined representatives from many of the world’s major religions in Japan to discuss ethical AI development.”

AMD takes a deep dive into architecture for the AI PC chips | VentureBeat

Author: Dean Takahashi

(Monday, July 15, 2024) “Advanced Micro Devices executives revealed the details of the chipmaker’s latest AI PC architecture, which includes a new neural processing unit (NPU) in the company’s latest AMD Ryzen AI chips. The company announced the latest AMD Ryzen […]”

MathΣtral | Mistral AI | Frontier AI in your hands

(Tuesday, July 16, 2024) “As a tribute to Archimedes, whose 2311th anniversary we’re celebrating this year, we are proud to release our first Mathstral model, a specific 7B model designed for math reasoning and scientific discovery. The model has a 32k context window published under the Apache 2.0 license.”
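
Assuming Mistral distributes the weights through Hugging Face as with its earlier models, loading Mathstral would be standard transformers usage. The repository name below is an assumption based on the announcement, not confirmed by it:

```python
# Hedged sketch: load and query Mathstral with Hugging Face transformers.
# The model ID is an assumption; check Mistral's Hugging Face page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mathstral-7B-v0.1"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Prove that the sum of the first n odd numbers is n^2."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The Apache 2.0 license mentioned in the announcement permits this kind of local use, including commercial deployment.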

AI in gaming: Developers worried by generative tech

“In a struggling games industry, AI has been hailed as a possible saviour. But not everyone’s convinced.”

Technical Papers and Preprints

[2407.12690] The Dual Imperative: Innovation and Regulation in the AI Era

Author: Carvão, Paulo

(Thursday, May 23, 2024) “This article addresses the societal costs associated with the lack of regulation in Artificial Intelligence and proposes a framework combining innovation and regulation. Over fifty years of AI research, catalyzed by declining computing costs and the proliferation of data, have propelled AI into the mainstream, promising significant economic benefits. Yet, this rapid adoption underscores risks, from bias amplification and labor disruptions to existential threats posed by autonomous systems. The discourse is polarized between accelerationists, advocating for unfettered technological advancement, and doomers, calling for a slowdown to prevent dystopian outcomes. This piece advocates for a middle path that leverages technical innovation and smart regulation to maximize the benefits of AI while minimizing its risks, offering a pragmatic approach to the responsible progress of AI technology. Technical invention beyond the most capable foundation models is needed to contain catastrophic risks. Regulation is required to create incentives for this research while addressing current issues.”

[2407.12043] The Art of Saying No: Contextual Noncompliance in Language Models

Authors: Brahman, Faeze; Kumar, Sachin; Balachandran, Vidhisha; Dasigi, Pradeep; Pyatkin, Valentina; Ravichander, Abhilasha; Wiegreffe, Sarah; Dziri, Nouha; Chandu, Khyathi; Hessel, Jack; Tsvetkov, Yulia; Smith, Noah A.; Choi, Yejin; Hajishirzi, Hannaneh

(Tuesday, July 02, 2024) “Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of “unsafe” queries, we posit that the scope of noncompliance should be broadened. We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should not comply with user requests. Our taxonomy spans a wide range of categories including incomplete, unsupported, indeterminate, and humanizing requests (in addition to unsafe requests). To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts. We find that most existing models show significantly high compliance rates in certain previously understudied categories with models like GPT-4 incorrectly complying with as many as 30% of requests. To address these gaps, we explore different training strategies using a synthetically-generated training set of requests and expected noncompliant responses. Our experiments demonstrate that while direct finetuning of instruction-tuned models can lead to both over-refusal and a decline in general capabilities, using parameter efficient methods like low rank adapters helps to strike a good balance between appropriate noncompliance and other capabilities.”
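
The remedy the abstract describes, low-rank adapters instead of full finetuning, corresponds to standard PEFT usage. A minimal sketch, with the base model and hyperparameters chosen for illustration rather than taken from the paper:

```python
# Sketch: attach low-rank adapters (LoRA) for noncompliance finetuning,
# per the paper's finding that parameter-efficient methods preserve
# general capabilities better than direct full finetuning.
# Base model and hyperparameters are assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # adapter rank: size of the low-rank update
    lora_alpha=32,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapters train; the base stays frozen
# Finetuning would then run on (request, expected noncompliant response) pairs.
```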
