No More Empty Spaces

by D. J. Green

In this thoughtful, engaging novel, divorced father, single parent, and geologist Will Ross attempts to make a better life for his children, Kevin, Rob, and Didi. We all know there is no “handbook” for raising children, and Will blunders along with his often-misguided intentions of being a good dad. Caught between his children, his demanding ex-wife, and his work obligations, he is clueless regarding his children’s needs and desires.

He accepts a job in Turkey, which he considers an excellent opportunity for the family to experience a different culture. Amazingly, however, he fails to consider how they will react to the change. Teenaged Kevin reacts badly, and the resulting rift with Will becomes almost insurmountable.

The exploration of family dynamics and conflicts, work-related ethical considerations, and appealing, well-developed, relatable characters all combine to create an outstanding, insightful, and heartwarming novel. I especially enjoyed the character Paula, a strong, steady woman. I eagerly look forward to reading Green’s next book. Green is local to Placitas.

From the publisher:

D. J. Green is a writer, geologist, and sailor, as well as a bookseller and partner in Bookworks, an independent bookstore in Albuquerque, New Mexico. She lives near the Sandia Mountains in Placitas, New Mexico, and cruises the Salish Sea on her sailboat during the summers. No More Empty Spaces is her first novel.

The following two books about artificial intelligence complement each other and, although similar in content, offer differing perspectives.

Our Final Invention: Artificial Intelligence and the End of the Human Era

by James Barrat

“The intelligence explosion idea was expressed by statistician I. J. Good in 1965: ‘Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.’”

“Humans need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, we could end up with a planet — eventually a galaxy — populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.” ~ James Barrat

Elon Musk, along with other notable visionaries, recommends Our Final Invention as a book everyone should read about the future.

What will it be like if and when we share our world with an intelligence a million times greater than our own? Will our species, Homo sapiens, become extinct, as the other, less intelligent species in the genus Homo did in the past? Numerous significant decisions are already made by AI, from indispensable tasks in our national infrastructure to complex medical diagnoses. Then there are Google’s autocomplete predictions, the question-answering ChatGPT program, smartphones, Amazon’s book suggestions, Siri, and Facebook’s friend recommendations. These are very basic types of artificial intelligence. Consider what would happen if computers controlled more aspects of life and could truly think for themselves. We tend to anthropomorphize intelligent machines like Siri and robots, but they are not innately friendly and do not feel empathy unless those features are programmed into them, which is unlikely, although scientists are attempting to figure out how to do so. “Scientists do believe that AI will have its own drives, and sufficiently intelligent AI will be in a strong position to fulfill those drives.”

Artificial General Intelligence (AGI) is defined as a machine with intelligence comparable to that of a human across all of the domains of human intelligence, with both self-awareness and the ability to learn from errors and improve its performance. Artificial Super Intelligence (ASI) refers to a machine whose intelligence exceeds that of the most intelligent human. Because a self-aware intelligent machine can modify its own programming far faster than evolution proceeds, an AGI could easily and quickly evolve into an ASI via a process known as an “intelligence explosion.”

This extensively researched book chronicles and assesses artificial intelligence and its potential risks in clear, accessible language. It presents significant issues and existential threats to consider alongside all of the beneficial things we know about AI.

Another subject Barrat delves into is the economic and military pressure to improve and fast-track AI development. He also discusses the relationship between AI and cyberwarfare, and the staggering number of malicious cyberattacks by entities that hack into and manipulate software programs.

Cyberattacks are the new weapons of choice today. For example, “…cyberattacks are now a basic part of China’s national and defense strategies. Why spend $300 billion on the Joint Strike Fighter program for the next gen fighter jet, as the Pentagon did in their most expensive contract, when you can steal the plans?” The safety of AI will depend upon the designers and those who influence them.

James Barrat is a documentary filmmaker who has written and produced films for National Geographic, Discovery, PBS, and other broadcasters in the United States and Europe for several decades.

Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom

Superintelligence is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” ~ Nick Bostrom

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. … [The] sensible thing to do would be to put it down gently, back out of the room, and contact the nearest adult. [But] the chances that we will all find the sense to put down the dangerous stuff seem almost negligible… Nor is there a grown-up in sight. [So] in the teeth of this most unnatural and inhuman problem [we] need to bring all our human resourcefulness to bear on its solution.” ~ Nick Bostrom

Highly recommended by both Elon Musk and Bill Gates, Superintelligence is a comprehensive analysis of the challenges and future of artificial intelligence. It explores questions about the potential for AI to become a “superintelligence” thousands of times more intelligent than humans, one that, soon after its creation, could use its self-learning ability to become a conscious being and dominate human beings.

The author states that he focuses his narrative on possible risks rather than the positive aspects, “since it seems more urgent to develop a precise detailed understanding of what issues could go awry, so they can be avoided.” More sophisticated than Our Final Invention, it offers a more detailed and in-depth assessment of AI.

Bostrom consulted 160 eminent AI researchers. “He discovered 50% of them think that an artificial general intelligence (AGI), an AI which is at least our equal in intelligence, will be created by 2050. 90% of the researchers think it will arrive by 2100.” If scientists and engineers cannot ensure that AI is human-friendly and cannot control its capabilities and constraints, we will become extinct. How can we ensure AI will not override its own software and programming? Can it become an independent conscious being? Who will control AI: billionaires? Countries? These are only some of the questions explored in this thoughtful book concerning the future of AI.

A machine with superintelligence would possess the ability to hack into vulnerable networks via the internet, take over mobile machines connected to those networks, use them to build additional machines, invent quantum computing and nanotechnology, and do whatever it can to give itself more power to achieve its goals, all at a speed far faster than humans could respond.

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He is the most-cited professional philosopher in the world aged 50 or under.

He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. His work has pioneered some of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds.

Adult book reviews are by Susanne Dominguez.
