Notable and Interesting Recent AI News, Articles, and Papers for Tuesday, July 23, 2024

A selection of the most important recent news, articles, and papers about AI.


Image of a futuristic AI data center

News, Articles, and Analyses

OpenAI Slashes the Cost of Using Its AI With a ‘Mini’ Model | WIRED

(Thursday, July 18, 2024) “With competing models—including many free ones—flooding the market, OpenAI is announcing a cheaper way to use its AI.”

AI in Context: Cloudera Accelerates AI ROI with Verta Acquisition – The Futurum Group

Author: Dr. Bob Sutor

“Learn why Cloudera’s acquisition of Verta was a smart move to extend its AI capabilities and accelerate customer AI implementation ROI.”

Technical Papers and Preprints

[2407.15160] When Can Transformers Count to n?

Authors: Yehudai, Gilad; Kaplan, Haim; Ghandeharioun, Asma; Geva, Mor; Globerson, Amir

(Sunday, July 21, 2024) “Large language models based on the transformer architecture can solve highly complex tasks. But are there simple tasks that such models cannot solve? Here we focus on very simple counting tasks that involve counting how many times a token in the vocabulary has appeared in a string. We show that if the dimension of the transformer state is linear in the context length, this task can be solved. However, the solution we propose does not scale beyond this limit, and we provide theoretical arguments for why it is likely impossible for a size-limited transformer to implement this task. Our empirical results demonstrate the same phase transition in performance, as anticipated by the theoretical argument. Our results demonstrate the importance of understanding how transformers can solve simple tasks.”
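To make the task concrete, here is the counting problem itself written as ordinary Python (my own illustration, not code from the paper); the question the authors study is whether a transformer of limited state size can learn this mapping, not how to program it classically:

```python
from collections import Counter

# The "token counting" task, stated classically: given a sequence of
# vocabulary tokens, report how many times each one has appeared.
def token_counts(tokens):
    return dict(Counter(tokens))

print(token_counts(["a", "b", "a", "c", "a", "b"]))
# {'a': 3, 'b': 2, 'c': 1}
```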

[2407.15671] Problems in AI, their roots in philosophy, and implications for science and society

Authors: Velthoven, Max; Marcus, Eric

(Monday, July 22, 2024) “Artificial Intelligence (AI) is one of today’s most relevant emergent technologies. In view thereof, this paper proposes that more attention should be paid to the philosophical aspects of AI technology and its use. It is argued that this deficit is generally combined with philosophical misconceptions about the growth of knowledge. To identify these misconceptions, reference is made to the ideas of the philosopher of science Karl Popper and the physicist David Deutsch. The works of both thinkers aim against mistaken theories of knowledge, such as inductivism, empiricism, and instrumentalism. This paper shows that these theories bear similarities to how current AI technology operates. It also shows that these theories are very much alive in the (public) discourse on AI, often called Bayesianism. In line with Popper and Deutsch, it is proposed that all these theories are based on mistaken philosophies of knowledge. This includes an analysis of the implications of these mistaken philosophies for the use of AI in science and society, including some of the likely problem situations that will arise. This paper finally provides a realistic outlook on Artificial General Intelligence (AGI) and three propositions on A(G)I and philosophy (i.e., epistemology).”

[2407.15847] LLMmap: Fingerprinting For Large Language Models

Authors: Pasquini, Dario; Kornaropoulos, Evgenios M.; Ateniese, Giuseppe

(Monday, July 22, 2024) “We introduce LLMmap, a first-generation fingerprinting attack targeted at LLM-integrated applications. LLMmap employs an active fingerprinting approach, sending carefully crafted queries to the application and analyzing the responses to identify the specific LLM model in use. With as few as 8 interactions, LLMmap can accurately identify LLMs with over 95% accuracy. More importantly, LLMmap is designed to be robust across different application layers, allowing it to identify LLMs operating under various system prompts, stochastic sampling hyperparameters, and even complex generation frameworks such as RAG or Chain-of-Thought.”

 

Notable and Interesting Recent AI News, Articles, and Papers for Thursday, July 18, 2024

A selection of the most important recent news, articles, and papers about AI.


Image of a futuristic AI data center

News, Articles, and Analyses

IBM text-to-SQL generator tops leaderboard – IBM Research

(Tuesday, July 02, 2024) “IBM’s generative AI solution takes a top spot on the BIRD benchmark for handling complex database queries”

Reaffirming IBM’s commitment to the Rome Call for AI ethics – IBM Research

(Monday, July 15, 2024) “IBM joined representatives from many of the world’s major religions in Japan to discuss ethical AI development.”

AMD takes a deep dive into architecture for the AI PC chips | VentureBeat

Author: Dean Takahashi

(Monday, July 15, 2024) “Advanced Micro Devices executives revealed the details of the chipmaker’s latest AI PC architecture, which includes a new neural processing unit (NPU) in the company’s latest AMD Ryzen AI chips. […]”

MathΣtral | Mistral AI | Frontier AI in your hands

(Tuesday, July 16, 2024) “As a tribute to Archimedes, whose 2311th anniversary we’re celebrating this year, we are proud to release our first Mathstral model, a specific 7B model designed for math reasoning and scientific discovery. The model has a 32k context window published under the Apache 2.0 license.”

AI in gaming: Developers worried by generative tech

“In a struggling games industry, AI has been hailed as a possible saviour. But not everyone’s convinced.”

Technical Papers and Preprints

[2407.12690] The Dual Imperative: Innovation and Regulation in the AI Era

Author: Carvão, Paulo

(Thursday, May 23, 2024) “This article addresses the societal costs associated with the lack of regulation in Artificial Intelligence and proposes a framework combining innovation and regulation. Over fifty years of AI research, catalyzed by declining computing costs and the proliferation of data, have propelled AI into the mainstream, promising significant economic benefits. Yet, this rapid adoption underscores risks, from bias amplification and labor disruptions to existential threats posed by autonomous systems. The discourse is polarized between accelerationists, advocating for unfettered technological advancement, and doomers, calling for a slowdown to prevent dystopian outcomes. This piece advocates for a middle path that leverages technical innovation and smart regulation to maximize the benefits of AI while minimizing its risks, offering a pragmatic approach to the responsible progress of AI technology. Technical invention beyond the most capable foundation models is needed to contain catastrophic risks. Regulation is required to create incentives for this research while addressing current issues.”

[2407.12043] The Art of Saying No: Contextual Noncompliance in Language Models

Authors: Brahman, Faeze; Kumar, Sachin; Balachandran, Vidhisha; Dasigi, Pradeep; Pyatkin, Valentina; Ravichander, Abhilasha; Wiegreffe, Sarah; Dziri, Nouha; Chandu, Khyathi; Hessel, Jack; Tsvetkov, Yulia; Smith, Noah A.; Choi, Yejin; Hajishirzi, Hannaneh

(Tuesday, July 02, 2024) “Chat-based language models are designed to be helpful, yet they should not comply with every user request. While most existing work primarily focuses on refusal of “unsafe” queries, we posit that the scope of noncompliance should be broadened. We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should not comply with user requests. Our taxonomy spans a wide range of categories including incomplete, unsupported, indeterminate, and humanizing requests (in addition to unsafe requests). To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts. We find that most existing models show significantly high compliance rates in certain previously understudied categories with models like GPT-4 incorrectly complying with as many as 30% of requests. To address these gaps, we explore different training strategies using a synthetically-generated training set of requests and expected noncompliant responses. Our experiments demonstrate that while direct finetuning of instruction-tuned models can lead to both over-refusal and a decline in general capabilities, using parameter efficient methods like low rank adapters helps to strike a good balance between appropriate noncompliance and other capabilities.”

 

Notable and Interesting Recent AI News, Articles, and Papers for Monday, July 15, 2024

A selection of the most important recent news, articles, and papers about AI.


Image of a futuristic AI data center

News, Articles, and Analyses

Developers get by with a little help from AI: Stack Overflow Knows code assistant pulse survey results – Stack Overflow

Gen AI and beyond: Where else to focus now | McKinsey

(Friday, July 12, 2024) “Yes, gen AI can be dazzling. But to deliver value, leaders will have to look beyond center stage.”

Designing for Education with Artificial Intelligence: An Essential Guide for Developers – Office of Educational Technology

“Informing product leads and their teams of innovators, designers, and developers as they work toward safety, security, and trust while creating AI products and services for use in education.”

IBM’s AI, Open-Source Granite Models & Sports Technology – The Futurum Group

Author: Steven Dickens

“Chief Technology Advisor Steven Dickens shares insights on how IBM uses AI to enhance sports, democratizing innovation through open-source.”

Technical Papers and Preprints

[2407.08488] Lynx: An Open Source Hallucination Evaluation Model

Authors: Ravi, Selvan Sunitha; Mielczarek, Bartosz; Kannappan, Anand; Kiela, Douwe; Qian, Rebecca

(Thursday, July 11, 2024) “Retrieval Augmented Generation (RAG) techniques aim to mitigate hallucinations in Large Language Models (LLMs). However, LLMs can still produce information that is unsupported or contradictory to the retrieved contexts. We introduce LYNX, a SOTA hallucination detection LLM that is capable of advanced reasoning on challenging real-world hallucination scenarios. To evaluate LYNX, we present HaluBench, a comprehensive hallucination evaluation benchmark, consisting of 15k samples sourced from various real-world domains. Our experiment results show that LYNX outperforms GPT-4o, Claude-3-Sonnet, and closed and open-source LLM-as-a-judge models on HaluBench. We release LYNX, HaluBench and our evaluation code for public access.”

[2407.08105] Federated Learning and AI Regulation in the European Union: Who is Responsible? — An Interdisciplinary Analysis

Authors: Woisetschläger, Herbert; Mertel, Simon; Krönke, Christoph; Mayer, Ruben; Jacobsen, Hans-Arno

(Thursday, July 11, 2024) “The European Union Artificial Intelligence Act mandates clear stakeholder responsibilities in developing and deploying machine learning applications to avoid substantial fines, prioritizing private and secure data processing with data remaining at its origin. Federated Learning (FL) enables the training of generative AI Models across data siloes, sharing only model parameters while improving data security. Since FL is a cooperative learning paradigm, clients and servers naturally share legal responsibility in the FL pipeline. Our work contributes to clarifying the roles of both parties, explains strategies for shifting responsibilities to the server operator, and points out open technical challenges that we must solve to improve FL’s practical applicability under the EU AI Act.”

 

Notable and Interesting Recent Quantum News, Articles, and Papers for Saturday, July 13, 2024

A selection of the most important recent news, articles, and papers about quantum computing.

Image of a cube-shaped futuristic quantum computer

News and Articles

A breakthrough on the edge: One step closer to topological quantum computing

(Wednesday, July 10, 2024) “Researchers at the University of Cologne have achieved a significant breakthrough in quantum materials, potentially setting the stage for advancements in topological superconductivity and robust quantum computing / publication in ‘Nature Physics’”

Partnership boosts UK access to most powerful quantum technologies – UKRI

(Thursday, July 11, 2024) “UK industry and researchers will gain unparalleled access to the world’s most powerful quantum computers.”

Bob Sutor; Vice President and Practice Lead, Emerging Technologies, The Futurum Group will speak at IQT Quantum + AI in New York City October 29-30 – Inside Quantum Technology

(Friday, July 12, 2024) “Bob Sutor; Vice President and Practice Lead, Emerging Technologies, The Futurum Group will speak at IQT Quantum + AI in New York City October 29-30. Dr. Bob Sutor has been a technical leader and executive in the IT industry for over 40 years. He is a theoretical mathematician by training, with a Ph.D. from Princeton”

Technical Papers and Preprints

[2406.17653] Algorithmic Fault Tolerance for Fast Quantum Computing

(Tuesday, June 25, 2024) “Fast, reliable logical operations are essential for the realization of useful quantum computers, as they are required to implement practical quantum algorithms at large scale. By redundantly encoding logical qubits into many physical qubits and using syndrome measurements to detect and subsequently correct errors, one can achieve very low logical error rates. However, for most practical quantum error correcting (QEC) codes such as the surface code, it is generally believed that due to syndrome extraction errors, multiple extraction rounds — on the order of the code distance d — are required for fault-tolerant computation. Here, we show that contrary to this common belief, fault-tolerant logical operations can be performed with constant time overhead for a broad class of QEC codes, including the surface code with magic state inputs and feed-forward operations, to achieve “algorithmic fault tolerance”. Through the combination of transversal operations and novel strategies for correlated decoding, despite only having access to partial syndrome information, we prove that the deviation from the ideal measurement result distribution can be made exponentially small in the code distance. We supplement this proof with circuit-level simulations in a range of relevant settings, demonstrating the fault tolerance and competitive performance of our approach. Our work sheds new light on the theory of fault tolerance, potentially reducing the space-time cost of practical fault-tolerant quantum computation by orders of magnitude.”

[2407.02553] Large-scale quantum reservoir learning with an analog quantum computer

(Tuesday, July 02, 2024) “Quantum machine learning has gained considerable attention as quantum technology advances, presenting a promising approach for efficiently learning complex data patterns. Despite this promise, most contemporary quantum methods require significant resources for variational parameter optimization and face issues with vanishing gradients, leading to experiments that are either limited in scale or lack potential for quantum advantage. To address this, we develop a general-purpose, gradient-free, and scalable quantum reservoir learning algorithm that harnesses the quantum dynamics of neutral-atom analog quantum computers to process data. We experimentally implement the algorithm, achieving competitive performance across various categories of machine learning tasks, including binary and multi-class classification, as well as timeseries prediction. Effective and improving learning is observed with increasing system sizes of up to 108 qubits, demonstrating the largest quantum machine learning experiment to date. We further observe comparative quantum kernel advantage in learning tasks by constructing synthetic datasets based on the geometric differences between generated quantum and classical data kernels. Our findings demonstrate the potential of utilizing classically intractable quantum correlations for effective machine learning. We expect these results to stimulate further extensions to different quantum hardware and machine learning paradigms, including early fault-tolerant hardware and generative machine learning tasks.”

[2407.07202] Quantum Approximate Optimization: A Computational Intelligence Perspective

(Tuesday, July 09, 2024) “Quantum computing is an emerging field on the multidisciplinary interface between physics, engineering, and computer science with the potential to make a large impact on computational intelligence (CI). The aim of this paper is to introduce quantum approximate optimization methods to the CI community because of direct relevance to solving combinatorial problems. We introduce quantum computing and variational quantum algorithms (VQAs). VQAs are an effective method for the near-term implementation of quantum solutions on noisy intermediate-scale quantum (NISQ) devices with less reliable qubits and early-stage error correction. Then, we explain Farhi et al.’s quantum approximate optimization algorithm (Farhi’s QAOA, to prevent confusion). This VQA is generalized by Hadfield et al. to the quantum alternating operator ansatz (QAOA), which is a nature-inspired (particularly, adiabatic) quantum metaheuristic for approximately solving combinatorial optimization problems on gate-based quantum computers. We discuss connections of QAOA to relevant domains, such as computational learning theory and genetic algorithms, discussing current techniques and known results regarding hybrid quantum-classical intelligence systems. We present a schematic of how QAOA is constructed, and also discuss how CI techniques can be used to improve QAOA. We conclude with QAOA implementations for the well-known maximum cut, maximum bisection, and traveling salesperson problems, which can serve as templates for CI practitioners interested in using QAOA.”

[2407.07694] Scalable, high-fidelity all-electronic control of trapped-ion qubits

(Wednesday, July 10, 2024) “The central challenge of quantum computing is implementing high-fidelity quantum gates at scale. However, many existing approaches to qubit control suffer from a scale-performance trade-off, impeding progress towards the creation of useful devices. Here, we present a vision for an electronically controlled trapped-ion quantum computer that alleviates this bottleneck. Our architecture utilizes shared current-carrying traces and local tuning electrodes in a microfabricated chip to perform quantum gates with low noise and crosstalk regardless of device size. To verify our approach, we experimentally demonstrate low-noise site-selective single- and two-qubit gates in a seven-zone ion trap that can control up to 10 qubits. We implement electronic single-qubit gates with 99.99916(7 […]”

 

Notable and Interesting Recent AI News, Articles, and Papers for Thursday, July 11, 2024

A selection of the most important recent news and articles about AI.

Image of a futuristic AI data center

Enabling Quantum Computing with AI | NVIDIA Technical Blog

(Sunday, May 12, 2024) “Building a useful quantum computer in practice is incredibly challenging. Significant improvements are needed in the scale, fidelity, speed, reliability, and programmability of quantum computers to…”

The Words That Give Away Generative AI Text | WIRED

(Sunday, July 07, 2024) “From ‘delves’ to ‘showcasing,’ certain words boomed in usage after LLMs became mainstream.”

Top 5 potential uses, pitfalls for generative AI in federal government

(Monday, July 08, 2024) “We believe Multi-Agent Systems are the only viable approach to bringing generative AI into the U.S. government in a managed manner.”

 

Notable Recent Quantum News, Articles, and Papers for Thursday, July 11, 2024

A selection of the most important recent news and articles about #quantumcomputing.


Futuristic quantum computer

Fourier Quantum Process Tomography | npj Quantum Information

(Thursday, May 09, 2024) “The characterization of a quantum device is a crucial step in the development of quantum experiments. This is accomplished via Quantum Process Tomography, which combines the outcomes of different projective measurements to deliver a possible reconstruction of the underlying process. The tomography is typically performed by processing an overcomplete set of measurements and extracting the process matrix from maximum-likelihood estimation. Here, we introduce Fourier Quantum Process Tomography, a technique which requires a reduced number of measurements, and benchmark its performance against the standard maximum-likelihood approach. Fourier Quantum Process Tomography is based on measuring probability distributions in two conjugate spaces for different state preparations and projections. Exploiting the concept of phase retrieval, our scheme achieves a complete and robust characterization of the setup by processing a near-minimal set of measurements. We experimentally test the technique on different space-dependent polarization transformations, reporting average fidelities higher than 90% and significant computational advantage.”

Enabling Quantum Computing with AI | NVIDIA Technical Blog

(Sunday, May 12, 2024) “Building a useful quantum computer in practice is incredibly challenging. Significant improvements are needed in the scale, fidelity, speed, reliability, and programmability of quantum computers to…”

Kipu Quantum Acquires Quantum Computing Platform Built by Anaqor AG to Accelerate Development of Industrially Relevant Quantum Solutions

(Thursday, July 11, 2024) “/PRNewswire/ — Kipu Quantum, the worldwide leading quantum software company, announced today the strategic acquisition of PlanQK, the German quantum computing…”

Simulating the universe’s most extreme environments | IBM Quantum Computing Blog

“Scalable techniques for quantum simulations of high-energy physics.”

Quantum in Context: Quantum Companies Rotate in New Leaders – The Futurum Group

“Learn which quantum computing companies have recently replaced their CEOs & reasons Boards of Directors make such changes.”

EDF, Alice & Bob, Quandela and CNRS Partner to Optimize Quantum Computing’s Energy Efficiency

“PARIS, July 10, 2024 — French electric utility company EDF, in collaboration with quantum computing firms Quandela and Alice & Bob, and the French National Centre for Scientific Research (CNRS), has […]”

Study of Quantum Computing Energy Efficiency – The Futurum Group

“Learn about a study in France that will look at the energy efficiency of quantum computing systems versus HPC for well-known algorithms.”

Oxford Ionics breaks global quantum performance records

“Oxford Ionics has demonstrated the highest performing quantum chip in the world, which can be produced at scale in a standard semiconductor fabrication plant.”

 

Notable and Interesting Recent Quantum News, Articles, and Papers for Tuesday, July 9, 2024

Futuristic quantum computer

planqc Announces €50 Million Series A

(Monday, July 08, 2024) “Digital atom-based quantum computing company planqc announced that the company has secured €50 million financing in a Series A round.”

Zapata AI and D-Wave Quantum Announce Expanded Partnership for Advanced Generative AI Solutions

“BOSTON and PALO ALTO, Calif., July 8, 2024 — Zapata Computing Holdings Inc., a leader in Industrial Generative AI software solutions, and D-Wave Quantum Inc., a leader in quantum computing […]”

Notable and Interesting Recent AI News, Articles, and Papers for Tuesday, July 9, 2024

Futuristic AI Data Center

Unleash developer productivity with generative AI | McKinsey

(Tuesday, June 27, 2023) “A new McKinsey study shows that software developers can complete tasks up to twice as fast with generative AI. Four actions can help maximize productivity.”

IBM Makes Generative AI Platform for DevOps Available – DevOps.com

(Tuesday, July 02, 2024) “IBM has made available IBM Concert, leveraging generative artificial intelligence and knowledge graphs to surface in real-time dependencies.”

Maintaining human oversight in AI-enhanced software development – Help Net Security

(Wednesday, July 03, 2024) “It’s not that AI-generated code introduces new security gaps; it just means that even more code will make its way through existing gaps.”

Transparency From Behind the Generative AI Curtain – The New Stack

(Friday, July 05, 2024) “The Foundational Model Transparency Index illuminates the black box of data on which large language models are trained.”

Nintendo Says Generative AI Can Be Used in ‘Creative Ways,’ but Highlights IP Issues – IGN

(Friday, July 05, 2024) “Nintendo has commented on the controversial topic of generative AI in video game development, outlining the pros and cons as it sees them.”

Enterprises must stop GenAI experiments and start long-term strategies | Computer Weekly

“Steven Webb, chief technology & innovation officer, Capgemini UK argues for enterprise organisations to put aside GenAI experimentation and build long-term strategies with it.”

Gen AI and software development | Deloitte Insights

“Freeplay CEO Ian Cairns describes how the organization has adapted to the paradigm shift that generative AI demands while building AI applications”

Zapata AI and D-Wave Quantum Announce Expanded Partnership for Advanced Generative AI Solutions

“BOSTON and PALO ALTO, Calif., July 8, 2024 — Zapata Computing Holdings Inc., a leader in Industrial Generative AI software solutions, and D-Wave Quantum Inc., a leader in quantum computing […]”

Notable and Interesting Recent AI News, Articles, and Papers for Monday, July 1, 2024

Futuristic AI Data Center

France leads the pack for generative AI funding in Europe | TechCrunch

(Wednesday, June 19, 2024) “Like it or hate it, artificial intelligence — especially generative AI — is the technology story of 2024. OpenAI, with its rollouts of viral services like”

Generative AI Can’t Cite Its Sources

(Wednesday, June 26, 2024) “How will OpenAI keep its promise to media companies?”

Illia Polosukhin On Inventing The Tech Behind Generative AI At Google

(Thursday, June 27, 2024) “Illia Polosukhin is one of the “Transformer 8,” a group that many call the founding fathers of generative AI. They co-wrote a paper at Google in 2017 that es…”

How generative AI could reinvent what it means to play

“AI-powered NPCs that don’t need a script could make games—and other worlds—deeply immersive.”

Cornell transforms generative AI education and clones a faculty member | Cornell Chronicle

“Designing and Building AI Solutions is a new online certificate program, with one-of-a-kind features designed to enhance the learning experience for those that desire to build their own AI products—no coding required.”

Data science in my book Dancing with Python

In my blog entry “Quantum computing in my book Dancing with Python,” I covered what the book says about quantum computing. I also published the entry “Availability of my book Dancing with Python and its table of contents.”

Today, I want to specifically list what I discuss in the book in what I term “an extended definition of data science.” The core chapters are in Part III. Here are their titles, introductions, and chapter tables of contents:

III Advanced Features and Libraries

12 Searching and Changing Text

We represent much of the world’s information as text. Think of all the words in all the digital newspapers, e-books, PDF files, blogs, emails, texts, and social media services such as Twitter and Facebook. Given a block of text, how do we search it to see if some desired information is present? How can we change the text to add formatting or corrections, or to extract information?

Chapter 4, Stringing You Along, covered Python’s functions and methods. This chapter begins with regular expressions and then proceeds to natural language processing (NLP) basics: how to go from a string of text to some of the meaning contained therein.

12.1 Core string search and replace methods
12.2 Regular expressions
12.3 Introduction to Natural Language Processing
12.4 Summary
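As a taste of what the chapter covers, here is a small sketch of search-and-replace with core string methods and Python’s built-in re module; the sample text and pattern are my own illustrations, not examples from the book:

```python
import re

text = "Contact us at info@example.com or sales@example.org today."

# Core string methods: simple search and replace
print(text.replace("Contact", "Reach"))

# Regular expressions: find substrings that look like email addresses
emails = re.findall(r"[\w.+-]+@[\w-]+\.\w+", text)
print(emails)  # ['info@example.com', 'sales@example.org']
```

Going from patterns like this to actual meaning is where the NLP libraries mentioned in the chapter, such as spacy, come in.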

13 Creating Plots and Charts

Among mathematicians and computer scientists, it’s said that a picture is worth 2¹⁰ words. Okay, that’s a bad joke, but it’s one thing to manipulate and compute with data, and quite another to create stunning visualizations that convey useful information.

While there are many ways of building images and charts, Matplotlib is the most widely used Python library for doing so. [MAT] Matplotlib is very flexible and can produce high-quality output for print or digital media. It also has great support for a wide variety of backends that give you powerful mouse-driven interactivity. Generally speaking, if you have a coding project and you need to visualize numeric information, see if Matplotlib already does what you want. This chapter covers the core functionality of this essential library.

13.1 Function plots
13.2 Bar charts
13.3 Histograms
13.4 Pie charts
13.5 Scatter plots
13.6 Moving to three dimensions
13.7 Summary
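A minimal example of the kind of function plot the chapter opens with (my own sketch, not code from the book); the Agg backend is chosen here only so the script runs without a display:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # file-output backend; no window needed
import matplotlib.pyplot as plt

# Plot sin(x) over one period and save the figure as a PNG
x = np.linspace(0.0, 2.0 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.set_title("A first function plot")
ax.legend()
fig.savefig("sine.png")
```

The same `fig, ax` pattern carries over to the bar charts, histograms, pie charts, and scatter plots listed above.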

14 Analyzing Data

While we can use fancy names like “data science,” “analytics,” and “artificial intelligence” to talk about working with data, sometimes you just want to read, write, and process files containing many rows and columns of information. People have been doing this interactively for years, typically using applications like Microsoft Excel® and online apps like Google Sheets™.

By “programmatically” manipulating data, I mean that we use Python functions and methods. This chapter uses the popular pandas library to create and manipulate these collections of rows and columns, called DataFrames. [PAN] [PCB] We introduce other methods later, in Chapter 15, Learning, Briefly. Before we discuss DataFrames, let’s review some core ideas from statistics.

14.1 Statistics
14.2 Cats and commas
14.3 pandas DataFrames
14.4 Data cleaning
14.5 Statistics with pandas
14.6 Converting categorical data
14.7 Cats by gender in each locality
14.8 Are all tortoiseshell cats female?
14.9 Cats in trees and circles
14.10 Summary
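In the spirit of the chapter’s cat examples, here is a tiny made-up DataFrame (the data and column names are hypothetical, not the book’s dataset) showing cleaning, categorical conversion, and grouped statistics with pandas:

```python
import pandas as pd

# Hypothetical toy data; the book works with its own, larger cat dataset
cats = pd.DataFrame({
    "name": ["Mia", "Tom", "Zoe", "Rex"],
    "sex": ["F", "M", "F", "M"],
    "weight_kg": [3.8, 5.1, 4.0, None],
})

# Data cleaning: fill the missing weight with the column mean
cats["weight_kg"] = cats["weight_kg"].fillna(cats["weight_kg"].mean())

# Convert a string column to a true categorical type
cats["sex"] = cats["sex"].astype("category")

# Statistics with pandas: average weight by sex
print(cats.groupby("sex", observed=True)["weight_kg"].mean())
```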

15 Learning, Briefly

Machine learning is not new, but it and its sub-discipline, deep learning, are now being used extensively for many applications in artificial intelligence (AI). There are hundreds of academic and practical coding books about machine learning.

This final chapter introduces machine learning and neural networks, primarily through the scikit-learn (sklearn) module. Consider it a jumping-off point from which you can use the Python features you’ve learned in this book to go more deeply into these essential AI areas if they interest you.

15.1 What is machine learning?
15.2 Cats again
15.3 Feature scaling
15.4 Feature selection and reduction
15.5 Clustering
15.6 Classification
15.7 Linear regression
15.8 Concepts of neural networks
15.9 Quantum machine learning
15.10 Summary
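As a sketch of where the chapter’s threads (feature scaling, classification) lead, here is a short scikit-learn pipeline on its built-in iris dataset; this is my own example, not code from the book:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Feature scaling followed by a classifier, wrapped in one pipeline
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Putting the scaler inside the pipeline ensures the same scaling learned from the training data is applied at prediction time.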

This book is an introduction, so my goal is to get you started on a broad range of topics. For example, here are the Python modules and packages discussed or used in each of the four chapters in Part III:

12 Searching and Changing Text: re, flashtext, spacy
13 Creating Plots and Charts: matplotlib, numpy, mpl_toolkits.mplot3d
14 Analyzing Data: pandas, numpy, matplotlib, squarify, matplotlib-venn
15 Learning, Briefly: sklearn, pandas, numpy

In the book, I mention several other packages in passing, such as pytorch, as pointers for further exploration. The list above does not include standard modules such as math, random, and sys.

Call for papers: Education, Research, and Application of Quantum Computing – HICSS 2022

Education, Research, and Application of Quantum Computing

My IBM Quantum colleague Dr. Andrew Wack and I are hosting a minitrack at the Hawaii International Conference on System Sciences (HICSS) 2022.

The description of the minitrack is:

There is no question that quantum computing will spur breakthroughs in natural science, AI, and computational algorithms such as those used in finance. IBM, Google, Honeywell, and several startups are working hard to create the next generation of “supercomputers” based on universal quantum technology.

What exactly is quantum computing, how does it work, how do we teach it, how do we leverage it in education and research, and what will it take to achieve these quantum breakthroughs?

The purpose of this minitrack is to bring together educators and researchers who are working to bring quantum computing into the mainstream.

We are looking for reports that

  • improve our understanding of how to integrate quantum computing into business, machine learning, computer science, and applied mathematics university curriculums,
  • describe hands-on student experiences with the open-source Qiskit quantum software development kit, and
  • extend computational techniques for business, finance, and economics from classical to quantum systems.

It is part of the Decision Analytics and Service Science track at HICSS.

Please consider submitting a report and sharing this Call for Papers with your colleagues.

The Amazon Kindle version of Dancing with Qubits is now available!

Page from the Kindle version of Dancing with Qubits

I’m pleased to announce that the Amazon Kindle version of my quantum computing book Dancing with Qubits is now available!

This book provides a comfortable and conversational introduction to quantum computing. I take you through the mathematics you need at a pace that allows you to understand not just “what” but also “why.” When we get to quantum computing, concepts like superposition and entanglement are shown to be natural ideas building on what we’ve already seen, and then illustrated via gates, circuits, and algorithms.

Throughout the book, I highlight important results, provide questions to answer, and give links to references where you can learn more. This allows the book to be used for self-study or as a textbook.

Important ideas like Quantum Volume are explained to give you a head start for reading more advanced texts and research papers. I provide many references to related content in math, physics, quantum computing, AI, and financial services. Dancing with Qubits concludes with questions for you to think about and ask experts so that you can gauge progress in the field over the next few years.

Features of the Kindle edition

Page from the book Dancing with Qubits

  • The text will get larger or smaller as you wish and you can change to a font that is comfortable for you to read.
  • There are links throughout the book to other sections and the references in each chapter.
  • Many of the references have links to external sources, such as arxiv or Nature for research papers.
  • The content is in color, if your Kindle device supports it.
  • You can search for terms throughout the book.
  • I’ve maximized the number of mathematical expressions that are expressed textually (see below) to improve the reading experience.

The print version of Dancing with Qubits still has the full, rich mathematical formatting, albeit in black and white. In essence, whether you choose the print or Kindle version, the content is consistent and the formatting is the best I know how to produce for each medium.

Technical Notes

Here are a few comments about the production of the Kindle version, in case you are interested.

Page from the book Dancing with Qubits

  • The original content for Dancing with Qubits is in LaTeX. From that I can produce the black and white print version, a color PDF eBook, and an epub3 file from which the Amazon Kindle and several other MOBI eBook versions are created.
  • I used make4ht and tex4ht to go from the LaTeX source files to HTML. While very powerful, their documentation is scarce, and I spent many hours figuring out how to make things work and then writing sed and Python scripts to fix things that were not quite right.
  • I wrote Python scripts to create the various files needed for epub3, such as opf and navigation, and to break the 30,000+ line HTML file into smaller XHTML files. I used tidy several times to format the HTML and XHTML.
  • The epub3 validators in several free epub3 editing apps either skipped problems entirely or gave false negatives. I found pagina EPUB-Checker to be the best software for validation.
  • I wanted to maximize the amount of HTML formatting I could use, and MathML is not practically available across all eBook formats. tex4ht produced very inconsistent results. So while I could express $x_2$ as x2 in the text without extra fonts, two-dimensional objects like matrices had to be represented as images. I created macros to produce the right format for whichever kind of document I was producing.
  • I used tikz/pgf and quantikz for the figures, especially the quantum circuit diagrams. I externalized the figures as JPEG images. It took quite a bit of experimentation to get them to be the right size for the Kindle version.
  • Some math expressions in the book and chapter tables of contents have weird spacing if they involve subscripts or superscripts. This is an artifact of the Kindle software. This did not happen, for example, when I viewed the book in the Apple Books app.
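The format-switching macros mentioned above might look something like the following sketch. The `\ifebook` switch and the `\qsub` macro name are illustrative inventions, not the book's actual source, but they show the general technique of choosing text-friendly or math-mode output per build target.

```latex
% Hypothetical sketch of a build-dependent macro: render simple
% subscripts as HTML-friendly text in the eBook build, but as real
% TeX math in the print build.
\newif\ifebook   % set \ebooktrue in the eBook build, \ebookfalse in print

\newcommand{\qsub}[2]{%
  \ifebook
    #1\textsubscript{#2}% plain text subscript, no extra fonts needed
  \else
    $#1_{#2}$%            full math formatting for print
  \fi}

% Usage: \qsub{x}{2} produces "x2" with a text subscript in the eBook
% and $x_2$ in print.
```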

Some practical things you can do to learn about quantum computing

People often ask me, “Where should I get started in order to learn about quantum computing?” Here are several steps you can take. I work for IBM, so things I link to will often be to the IBM Quantum program. Also, I acknowledge that several of the links and videos toward the beginning involve me, but we’ll get through those quickly.

Watch some introductory videos

If you only watch one video, watch this one from WIRED with Talia Gershon:

This one with me is from early 2019 and discusses the IBM Q System One:

Finally, this video from CNBC with Professor Scott Aaronson of the University of Texas at Austin, Martin Reynolds of Gartner, and me brings things up to date as of January 2020. Note that I personally do not support many of the statements about “Quantum Supremacy” (horrible label; supercomputers do have massive amounts of storage; off-by-15-million-percent math error):

Get a book

If you are really just getting started and want to systematically work through the required math at an easy and conversational pace, my book Dancing with Qubits should prepare you for more advanced material and give you a start to reading research papers. (Shameless self-plug.)

If you are a hard-core physics and/or computer science person, you want to have Quantum Computation and Quantum Information: 10th Anniversary Edition by Michael A. Nielsen and Isaac L. Chuang in your library. It’s a little old by now, but if you want to end up doing quantum computing research, you will likely have to become very familiar and comfortable with its contents. Other books to consider are Quantum Computing: A Gentle Introduction (good on algorithms, though “gentle” is subjective!) and Quantum Computing for Computer Scientists (a bit dated; make sure you get a copy of the errata).

Play a game

Hello Quantum is available for Apple iOS and Android and will teach you the basics of how quantum gates and circuits work.

Hello Quantum screen shots

Build and run circuits with a real quantum computer

Quantum simulators have their place for basic education, experimentation, and debugging. Note, though, that a quantum simulator is to real quantum computer hardware as a TV console flight simulator is to a real plane. If you want a job as a pilot, I would prefer you knew how to fly an actual airplane.

The easiest way to get started without writing code is with the IBM Quantum Composer within the IBM Quantum Experience.

The IBM Quantum Experience has over 200,000 registered users, so you’ll be joining a very large community of beginner, intermediate, and advanced users.

IBM Quantum Composer

Learn Python

If you are going to write quantum computing code, learn Python. As I write this, the latest version is 3.8. You want Python 3, not Python 2.

Learn Jupyter Notebooks

This is the modern way of developing full documents with interactive code, executions, graphics, videos, and visualizations. It’s used within the IBM Quantum Experience but also many other computational and AI applications. You are mainly interested in how to use it through a browser, not how to run and maintain the console.

Website (introductory): Introduction to Jupyter Notebooks

Write quantum computing code in Qiskit

Qiskit is the leading open source platform for developing quantum computing code and applications. It’s available on GitHub under the Apache 2.0 license. It’s had over 300,000 downloads, but I’m recommending you use it through your browser on the IBM Cloud. As with the Composer, it is available through the IBM Quantum Experience.
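Qiskit itself is the tool to learn, but the core idea behind any quantum circuit fits in a few lines of plain Python: gates are just linear algebra on a vector of complex amplitudes. This toy sketch (deliberately not using Qiskit, so it runs anywhere) builds the two-qubit Bell state that the canonical Hadamard-then-CNOT “hello world” circuit produces.

```python
# Toy two-qubit state-vector simulation. The state is a list of four
# amplitudes [a00, a01, a10, a11] for basis states |00>, |01>, |10>, |11>.
from math import sqrt

def apply_h_q0(state):
    """Hadamard gate on the first qubit."""
    a00, a01, a10, a11 = state
    s = 1 / sqrt(2)
    return [s * (a00 + a10), s * (a01 + a11),
            s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with the first qubit as control: swaps |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

state = [1, 0, 0, 0]                    # start in |00>
state = apply_cnot(apply_h_q0(state))   # H on qubit 0, then CNOT
print(state)  # approximately [0.707, 0, 0, 0.707]: the Bell state
```

Measuring this state gives 00 or 11 with equal probability and never 01 or 10, which is the entanglement the introductory Qiskit tutorials demonstrate on real hardware.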

Whether you want to download Qiskit or use it online, the easiest way to get started is to watch the series of videos by Abe Asfaw.

From there, you can watch the other videos and also learn about the Qiskit Community.

At this point you are ready to work your way through the online open source textbook Learn Quantum Computing through Qiskit.

Open source Qiskit textbook

My #BCTECHSummit 2019 talk

Bob Sutor speaking #BCTECHSummit in Vancouver in March, 2019
Photo courtesy of IBM Canada

I spoke this morning about quantum computing at #BCTECHSummit in Vancouver, British Columbia. Here are some of the points I emphasized:

  • The mainstream efforts including IBM Q are universal quantum computing systems with the eventual goal of full fault tolerance.
  • However, we believe “Quantum Advantage,” where we show significant improvement over classical methods and machines, may happen in the next decade, well before fault tolerance.
  • Don’t say “quantum computing will.” Say it “might.” Publish your results and your measurements.
  • Since May, 2016, IBM has hosted the IBM Q Experience, the most advanced and most widely used quantum cloud service. Over 100,000 users have executed close to 9 million quantum circuits. There is no charge for using the IBM Q Experience.
  • Qiskit is the most advanced open source framework for programming a quantum computer. It has components that provide high level user libraries, low level access, APIs for connecting to quantum computers and simulators, and new measurement tools for errors and performance.
  • Chemistry, AI, and cross-industry techniques such as Monte Carlo replacements are the areas that show great promise for the earliest Quantum Advantage examples.
  • The IBM Q Network is built around a worldwide collection of hubs, direct partnerships, academic memberships, and startups working to accelerate education and to find the earliest use cases that demonstrate Quantum Advantage.
  • Last week IBM Q published “Cramming More Power Into a Quantum Device” that discussed the whole-system Quantum Volume measurement, how we have doubled this every year since 2017, and how we believe there is headroom to continue at this pace.
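The doubling claim in the last bullet implies straightforward exponential growth. As an illustration only (the starting value of 4 matches IBM's reported 2017 Quantum Volume, but this is a projection, not measured data):

```python
# Project Quantum Volume under the "doubles every year" assumption,
# starting from QV 4 in 2017 as IBM reported.
def projected_qv(year, base_year=2017, base_qv=4):
    """Quantum Volume if it doubles each year after base_year."""
    return base_qv * 2 ** (year - base_year)

for y in range(2017, 2021):
    print(y, projected_qv(y))  # 4, 8, 16, 32
```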