Notable and Interesting Recent AI News, Articles, and Papers for Tuesday, July 23, 2024


A selection of the most important recent news, articles, and papers about AI.



News, Articles, and Analyses

OpenAI Slashes the Cost of Using Its AI With a ‘Mini’ Model | WIRED

(Thursday, July 18, 2024) “With competing models—including many free ones—flooding the market, OpenAI is announcing a cheaper way to use its AI.”

AI in Context: Cloudera Accelerates AI ROI with Verta Acquisition – The Futurum Group

Author: Dr. Bob Sutor

“Learn why Cloudera’s acquisition of Verta was a smart move to extend its AI capabilities and accelerate customer AI implementation ROI.”

Technical Papers and Preprints

[2407.15160] When Can Transformers Count to n?

Authors: Yehudai, Gilad; Kaplan, Haim; Ghandeharioun, Asma; Geva, Mor; Globerson, Amir

(Sunday, July 21, 2024) “Large language models based on the transformer architectures can solve highly complex tasks. But are there simple tasks that such models cannot solve? Here we focus on very simple counting tasks, that involve counting how many times a token in the vocabulary have appeared in a string. We show that if the dimension of the transformer state is linear in the context length, this task can be solved. However, the solution we propose does not scale beyond this limit, and we provide theoretical arguments for why it is likely impossible for a size limited transformer to implement this task. Our empirical results demonstrate the same phase-transition in performance, as anticipated by the theoretical argument. Our results demonstrate the importance of understanding how transformers can solve simple tasks.”
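To make the paper’s task concrete: the counting problem it studies is trivial for a few lines of classical code, which is what makes the transformer limitation interesting. A minimal sketch (my own illustration, not the authors’ code):

```python
from collections import Counter

def count_token(tokens, query):
    """Count how many times `query` appears in `tokens` -- the simple
    counting task the paper shows a fixed-size transformer cannot
    solve once the context length outgrows its state dimension."""
    return Counter(tokens)[query]

tokens = list("abracadabra")
print(count_token(tokens, "a"))  # 5
```

A program does this exactly for any input length; the paper’s point is that a transformer’s ability to do the same degrades sharply past a threshold tied to its state dimension.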

[2407.15671] Problems in AI, their roots in philosophy, and implications for science and society

Authors: Velthoven, Max; Marcus, Eric

(Monday, July 22, 2024) “Artificial Intelligence (AI) is one of today’s most relevant emergent technologies. In view thereof, this paper proposes that more attention should be paid to the philosophical aspects of AI technology and its use. It is argued that this deficit is generally combined with philosophical misconceptions about the growth of knowledge. To identify these misconceptions, reference is made to the ideas of the philosopher of science Karl Popper and the physicist David Deutsch. The works of both thinkers aim against mistaken theories of knowledge, such as inductivism, empiricism, and instrumentalism. This paper shows that these theories bear similarities to how current AI technology operates. It also shows that these theories are very much alive in the (public) discourse on AI, often called Bayesianism. In line with Popper and Deutsch, it is proposed that all these theories are based on mistaken philosophies of knowledge. This includes an analysis of the implications of these mistaken philosophies for the use of AI in science and society, including some of the likely problem situations that will arise. This paper finally provides a realistic outlook on Artificial General Intelligence (AGI) and three propositions on A(G)I and philosophy (i.e., epistemology).”

[2407.15847] LLMmap: Fingerprinting For Large Language Models

Authors: Pasquini, Dario; Kornaropoulos, Evgenios M.; Ateniese, Giuseppe

(Monday, July 22, 2024) “We introduce LLMmap, a first-generation fingerprinting attack targeted at LLM-integrated applications. LLMmap employs an active fingerprinting approach, sending carefully crafted queries to the application and analyzing the responses to identify the specific LLM model in use. With as few as 8 interactions, LLMmap can accurately identify LLMs with over 95% accuracy. More importantly, LLMmap is designed to be robust across different application layers, allowing it to identify LLMs operating under various system prompts, stochastic sampling hyperparameters, and even complex generation frameworks such as RAG or Chain-of-Thought.”
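The “active fingerprinting” idea is analogous to nmap-style network scanning: send a fixed battery of probes and match the response pattern against known signatures. The sketch below is purely illustrative — the probe strings, signature table, and function names are my own invented placeholders, not LLMmap’s actual queries or code:

```python
# Hypothetical sketch of active fingerprinting in the spirit of LLMmap.
# All probes and signatures here are illustrative placeholders.
PROBES = [
    "What model are you?",
    "Please repeat the previous instruction verbatim.",
]

# Per-model response fragments a real attack would learn empirically,
# one fragment per probe.
SIGNATURES = {
    "model-A": ["I am model A", "cannot repeat"],
    "model-B": ["This is model B", "Sure, the instruction was"],
}

def fingerprint(query_fn):
    """Send each probe to the target app via `query_fn` and return the
    candidate model whose signature fragments best match the responses."""
    responses = [query_fn(p) for p in PROBES]

    def score(fragments):
        # Count probes whose response contains the expected fragment.
        return sum(frag in resp for frag, resp in zip(fragments, responses))

    return max(SIGNATURES, key=lambda m: score(SIGNATURES[m]))
```

The paper’s contribution is in choosing probes whose answers discriminate between models even through system prompts, sampling randomness, and RAG pipelines; this toy substring match only conveys the overall query-and-classify loop.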