Before I discuss what and how I wrote, let me talk about the markup of the book. By “markup” I mean the underlying format of the content that determines its structure, such as the title page, table of contents, parts, chapters, sections, paragraphs, bibliography, and index, along with font styles and sizes.
In my experience, most publishers, both traditional and online, prefer that you use Microsoft Word to create the book. Word has its own underlying markup language that you typically never see; you write and style the book in a more-or-less what-you-see-is-what-you-get way. The publishing workflow is often built around this choice.
My requirements for the book creation process included:
beautiful math rendering, both in sentences and displayed multi-part formulas,
built-in support for generating diagrams,
easy methods to change formatting throughout the book quickly, and
good support for working quickly on a large text.
Regarding the size of the book, in early 2019 I thought it would come in at around 300 pages and that I would have a complete draft by September 1. I ended up writing a book of slightly more than 500 pages, with the first full draft delivered on October 9. I had full drafts of various chapters before then, but that was the first time there were no sections with TODO markers.
Word has come a long way on many of these requirements, especially the math, though it can be very laborious to create a book with hundreds or thousands of formulas. Here’s the real problem though: eBooks with math in them often look terrible if you put them in a reflowable format. That is, if you let, say, your Amazon Kindle change the fonts and the line widths, the math just doesn’t look right.
People can argue about this forever, but there is an excellent chance that you will end up with fuzzy, misaligned expressions that are the wrong size compared to the surrounding text. So early on, I made the decision that the eBook would not be reflowable. Since that was the case, there was no reason for me to stick with Word. I decided to mark up the book in LaTeX. Luckily, Andrew Waldron at Packt Publishing agreed. [Though see this later development regarding the eBook.]
With LaTeX, you have complete and arbitrary control over all parts of the formatting. There are thousands of packages that make your life easier by providing significant functionality that you would not want to write yourself.
LaTeX has
the best math formatting facilities of any system,
a full macro programming language for formatting control and calculations, and
easy ways to break a document into sections so you can work on one part at a time.
If you get into macro programming, things can get complicated. I’ve been doing it for 30 years, so it doesn’t faze me. There are several good books on LaTeX that can get you started.
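To give a flavor of what this looks like in practice, here is a minimal sketch of a top-level LaTeX file for a book organized this way. The file names, the \ket macro, and the particular packages are illustrative assumptions on my part, not the actual source of Dancing with Qubits.

% main.tex -- an illustrative top-level file, not the book's actual source
\documentclass[11pt]{book}

\usepackage{amsmath}   % aligned, multi-part displayed formulas
\usepackage{tikz}      % programmatic diagrams

% One macro controls how every ket such as |0> is typeset;
% change this definition once and the whole book updates.
\newcommand{\ket}[1]{\left\lvert #1 \right\rangle}

\begin{document}

\tableofcontents

% Each chapter lives in its own file so you can work on one piece at a time.
\include{chapter-complex-numbers}
\include{chapter-qubits-and-gates}

\end{document}

Inside a chapter file, inline math such as \( \ket{0} \) and the displayed align environments from amsmath handle the formulas, and \includeonly in the preamble lets you compile just the chapter you are currently working on.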
With a month to go before publication, we are still making last-minute tweaks to my quantum computing book Dancing with Qubits. We made two changes to the cover this week. Can you spot the differences?
The old version is the first image, the new version is the second:
How do you interpret the change made in the subtitle?
Way back in 1992, Springer-Verlag published my first book Axiom: The Scientific Computation System, co-authored with the late Richard D. Jenks. Since then I’ve thought of writing other books, but work and life in general caused enough inertia that I never got around to it.
I first got involved with IBM’s quantum computing effort in early 2016. By 2018, I was again thinking of writing a book and this subject was an obvious candidate. How would I start? What would I say? What was my perspective on the topic given that there were already some excellent books?
To write a book, you have to start writing. This is obvious, but no less true and important. In the summer of 2018, I started writing what I thought would be the introduction to the book. My perspective was, and is, very much from the mathematical and computer science directions. To be clear, I am not a physicist. If I could produce a coherent introduction to what I thought the book would cover, I might convince myself that it would be worth the hundreds of hours it would take to complete the project.
When I recently announced that the book was available for pre-order, my industry colleague Jason Bloomberg asked:
“So where does it fall on the spectrum between ‘totally accurate yet completely impenetrable’ and ‘approachable by normal humans but a complete whitewash’?”
I responded:
“I bring you along … to give you the underlying science of quantum computing so you can then read the “totally accurate but formally impenetrable” texts.”
I decided that I would cover the basic math necessary to understand quantum computing, and then get into quantum bits (qubits), gates, circuits, and algorithms. Although readers with the necessary background (or perhaps a good memory of that background) can skip the mathematical fundamentals, I decided to take people through the algebra and geometry of complex numbers, linear algebra, and probability necessary to understand what qubits are and what you can do with them.
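To make this concrete, here is the sort of statement all of that math leads up to, written in standard quantum computing notation rather than quoted from the book: a qubit’s state is a vector of two complex numbers, and a gate is a unitary matrix acting on that vector.

\[
\lvert \psi \rangle = a\,\lvert 0 \rangle + b\,\lvert 1 \rangle,
\qquad a, b \in \mathbb{C},
\qquad \lvert a \rvert^{2} + \lvert b \rvert^{2} = 1
\]

For example, the Hadamard gate

\[
H = \frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix},
\qquad
H \lvert 0 \rangle = \frac{\lvert 0 \rangle + \lvert 1 \rangle}{\sqrt{2}}
\]

takes a qubit that starts in \( \lvert 0 \rangle \) to an equal superposition of \( \lvert 0 \rangle \) and \( \lvert 1 \rangle \). After the chapters on complex numbers, linear algebra, and probability, statements like these read as ordinary mathematics.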
That early draft of the book’s introduction described roughly 15 chapters divided into three parts. The final book has 12 chapters in two parts. That introduction eventually became the Preface, and Part III eventually became Chapter 1.
It’s much tighter than I imagined it would be, but there is still material I could have covered. There’s a natural tendency to want to add more and more, but I kept asking myself “What is this book about? How deeply do I want to go? Am I getting off track? Will I ever finish?”
As 2018 went on, I kept tweaking the introduction, and I started talking to publishers. In November, I started writing what was then the first chapter. Although I began in Microsoft Word, which is overwhelmingly the format of choice among publishers, I quickly switched to LaTeX. This produced a far more beautiful book, but it also placed constraints on how I could publish it.
With this as a teaser, in future entries I’ll talk more about the writing process, the choices I made, the LaTeX packages I used and macros I wrote, deciding how to publish the book, and working with editors. Once the book is available, I’ll talk about the specific content and why I included what I did.
I had a great time talking to Lisa Abramowicz at Bloomberg Radio in New York City this morning about quantum computing and IBM Q. You can listen to a recording of it online.
It was a beautiful spring day in Boston last Saturday, April 6, when my IBM Q colleague Melissa Turesky and I headed to the Museum of Science on the Charles River. It was a special event, “NanoDays with a Quantum Leap,” and I spoke about the IBM Q quantum computing program and how people could start coding it today.
I was most happy to see how many young people were at the museum and participating in the NanoDays events. A lot of what we are doing now with quantum computing is education and I hope that exhibits like this will encourage girls and boys to learn more about the area. I’d love to have someone tell me in 10 years that the museum exhibit inspired them to pursue a quantum-related STEM career.
Since 2016, over 100,000 people have used the IBM Q Experience, and they have run over 9.5 million executions. A 50-qubit model of the IBM Q System One will be in residence as part of the quantum exhibit until the end of May.
I spoke this morning about quantum computing at #BCTECHSummit in Vancouver, British Columbia. Here are some of the points I emphasized:
The mainstream efforts, including IBM Q, are building universal quantum computing systems, with the eventual goal of full fault tolerance.
However, we believe “Quantum Advantage,” where we show significant improvement over classical methods and machines, may happen in the next decade, well before fault tolerance.
Don’t say “quantum computing will.” Say it “might.” Publish your results and your measurements.
Since May 2016, IBM has hosted the IBM Q Experience, the most advanced and most widely used quantum cloud service. Over 100,000 users have executed close to 9 million quantum circuits. There is no charge for using the IBM Q Experience.
Qiskit is the most advanced open source framework for programming a quantum computer. It has components that provide high-level user libraries, low-level access, APIs for connecting to quantum computers and simulators, and new measurement tools for errors and performance.
Chemistry, AI, and cross-industry techniques such as Monte Carlo replacements are the areas that show great promise for the earliest Quantum Advantage examples.
The IBM Q Network is built around a worldwide collection of hubs, direct partnerships, academic memberships, and startups working to accelerate education and to find the earliest use cases that demonstrate Quantum Advantage.
Last week IBM Q published “Cramming More Power Into a Quantum Device,” which discussed the whole-system Quantum Volume measurement, how we have doubled this measure every year since 2017, and how we believe there is headroom to continue at this pace.
This talk was at the Linux Foundation Open FinTech Forum in New York City in late 2018. The title refers to the number of qubits not being the only significant metric for determining the power of a quantum computer.
I gave this talk at the Vanderbilt University School of Engineering in 2018. It’s one of my longer talks at just over an hour, but it goes into more detail than most of my intros.