The age of intelligent machines is upon us, and we are at a reflection point. The proliferation of fast-moving technologies, including forms of artificial intelligence, will cause us to confront profound questions about ourselves. The era of human intellectual superiority is ending, and, as a species, we need to plan for this monumental shift.
A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are examines the immense impact intelligent technology will have on humanity. These machines, while challenging our personal beliefs and our socio-economic world order, also have the potential to transform our health and well-being, alleviate poverty and suffering, and reveal the mysteries of intelligence and consciousness. International human rights attorney Flynn Coleman deftly argues that it is critical we instill values, ethics, and morals into our robots, algorithms, and other forms of AI. Equally important, we need to develop and implement laws, policies, and oversight mechanisms to protect us from tech's insidious threats.
To realize AI's transcendent potential, Coleman advocates for inviting a diverse group of voices to participate in designing our intelligent machines and using our moral imagination to ensure that human rights, empathy, and equity are core principles of emerging technologies. Ultimately, A Human Algorithm is a clarion call for building a more humane future and moving conscientiously into a new frontier of our own design.
Artificial Intelligence is almost upon us. Lots of people are using Siri on their phones or have an Alexa or Google Home device to help them organise their busy lives. But as handy as these are, the next generation of AI is going to revolutionise the world in many different ways and cause us to ask many searching and profound questions about this technology in our lives. Will it be the end of humanity? Or can these technologies be used as a power for good?
It is thought that there are around 700 people working on AI in one form or another around the world, and about another 70,000 software engineers who understand how it functions. The problem is that this tiny subset of people has in its hands something with the potential to dramatically affect up to 7 billion people around the world, in good and bad ways. One of the issues affecting the development of AI is that there is almost no diversity among the voices contributing to this technology. Black and Latino developers are conspicuous by their absence. For example, one conference had seven black attendees, only one of whom was a woman. Because it is developed by a very narrow clique, the majority of whom are white, male and have often attended one or two of the major universities, it is inherently sexist, racist and biased.
It is said that the first trillionaire in the world will be the person who makes AI a reality. Worryingly, there are no global standards for AI systems, nor are there any moral guidelines to help structure some of the internal decision making. There are significant gaps between those building the technologies, those policing them and those who will be affected by them; they seem to be more chasms than gaps, though. AI automation will also lead to mass changes in employment at the lower level. This was already beginning to happen before the COVID pandemic hit, and the financial gulf between rich and poor is widening day by day.
One place where you will find AI starting to proliferate is social media. It can be great, but it can be a nest of vipers too, as well as an echo chamber, and it brings out the worst in us through tribalism and confirmation bias. Always remember: if you are not paying for a product or service, then you are the product.
Amongst all the bad news, though, there are some positive effects of AI. It is being used on projects that promote sustainability and humanitarian aims; drones, for instance, can deliver food and medicine to remote areas. Another scheme is using it to make incarceration more humane and allow better rehabilitation of prisoners. Another sphere that shows great promise is healthcare. Doctors cannot know every single disease or illness out there; in one clinical trial, medical assistants using an AI tool to crunch data were accurate 91% of the time, without having to use labs, medical imaging or even having sat exams. The software developed by IBM, called Watson, read 25 million medical papers in a week or so and could recommend treatments that it had found in obscure medical trials. There are even robots that have begun to communicate with each other in a language that we cannot understand.
The fundamental question that this book asks is: do we want AI to help us, or to become a monster? If we do this right, we gain a brilliant new partner; if we get it wrong, it could be the advent of a new dark age and we all suffer. Is it just me thinking of The Matrix or Skynet? How will we as a global population react to AI and robots? If the paranoia about the new 5G mobile networks is anything to go by, it might not go that well.
"Fiction is empathy technology" (Steven Pinker)
Coleman puts both sides of the argument for and against AI really well in this book. Whilst it has the potential to be a force for good, she is careful to detail the ways in which it could be an utter disaster. She explores all manner of subjects connected to AI, from the history behind it to the economy, and even what consciousness is and whether AI can become conscious. It is written with clarity about a complicated subject. There is no moral machine without a moral human, and the key to getting a useful technology that works for all, and not just a techno-rich elite, is empathy, that ability humans have to feel what other people are feeling. It is a quality that is sadly lacking in today's world. It is essential to our survival to include it in AI. Highly recommended.
Such a compelling and important work by Flynn Coleman. Artificial intelligence is already here and will only become more crucial and fraught, yet we have thought far too little about its implications for our society. The author does an exceptional job in teasing out many of the ethical and moral issues that need to be addressed with AI, as well as the policy challenges.
I particularly enjoyed the chapters on public policy—the first setting out worst case scenarios, the second on massively beneficial potential, and a third on policy proposals. Of the first two chapters, I found the first (the pernicious threat) the more likely to emerge, unfortunately. We’re already seeing usage of AI in service of repression (facial recognition technology used to identify protestors in Hong Kong; Chinese digital censorship, including even blocking images of Winnie the Pooh as mocking Xi Jinping; Russian bots and algorithms used to disrupt US social media and interfere with our elections). And as Coleman points out, the risk of encoding our own prejudices in AI is very real. AI could also exacerbate current trends of extreme income and wealth inequality in our winner-take-all society.
For that reason, I enjoyed and paid close attention to Coleman’s proposed policy solutions. The tricky thing with regulation is that it is often reactive to problems rather than proactive. There are good reasons for that, generally speaking: the United States has a free and vibrant economy, including in the tech sector. But the potential perils of AI militate in favor of a more robust and proactive regulatory scheme in order to counter some of the threats above.
From a broader viewpoint, I appreciated the consideration of Universal Basic Income and other policies that could mitigate some of the negative effects of increasing automation in our world. I think it’s unlikely that algorithmic tools and models will convince lawmakers to take a technocratic focus to improve our society, and I think it’s important to be clear-eyed about why that is the case: one political party in the US, the Republican Party, has rejected evidence-based politics in the fields of climate change, tax policy, immigration policy, and many others. Our political system overweights rural and Republican interests, making it difficult to implement progressive change even when that’s the preference of a majority of citizens.
Difficult, but not impossible, and this book provides a broad road map for the next progressive administration to consider how to use the transformative potential of AI to better our world. And as important, the book inspires all of us to consider how to broaden our own views of the interconnectedness of our world, and the importance of approaching it with empathy and perspective.
After I finished reading Kai-Fu Lee’s “AI Superpowers: China, Silicon Valley & the New World Order,” I became very interested in a deeper understanding of AI, its integral components, and how it is forecast to be employed in the coming months, years, and decade.
I took the easy way and browsed the available books on AI recommended by Audible & Goodreads, trusting in a thrilling outcome. Not so with this book.
Instead I read what sounded more like a college research paper on “AI as it relates to future society,” with dozens of quotes from notables in every single chapter. The author displays a working understanding of AI, but she is far from an “expert”; rather, she opines on the societal challenges AI will present as if lecturing to an audience of “policy experts” from nations around the world. I have to admit to almost putting the book down several times when all that was discussed were the macroeconomic and environmental ramifications of AI in the coming years; it felt like a combination of “Karla Marx” and Greta Thunberg. Not at all what I was aiming for. I’ll do a better job of digging into a book’s story, the author, and what other readers had to say in future selections that don’t come via recommendation sources.
In addition to providing a fascinating overview of the role AI has in our current lives and its potential to develop in world-altering ways in the coming decades, this book goes much further, exploring the literary precedents shaping how we conceptualize AI (and how this relates to how we conceive the "other" in general), and delving deeply into the ethical questions that AI requires us to face. The book is concerned not only with what kinds of ethical safeguards we can or should put in place as AI becomes more powerful and omnipresent, but also with questions about how we should understand human ethics in the context of living with AI, why we hold the values we do, and how the emergence of AI can help us to imagine different, possibly better, futures for ourselves.
A phenomenal first book by Flynn Coleman. Equal parts insightful and entertaining, A Human Algorithm explores how technology now touches virtually every aspect of human life (from law and ethics to culture and politics). More importantly, it examines how we should influence tech at this pivotal moment in history. It’s eye-opening and inspiring. Coleman’s expertise and optimism permeate every page. A must-read for tech enthusiasts, legal scholars, ethicists, and everyday humans who love well-crafted books.
What struck me most is that the author is a practiced writer who organized and structured her ideas well. This is a book for which you should not set your expectations too high in terms of information; the author probably knows no more than you do about artificial intelligence, and the book does not really promise information in the first place. It is a humanistic book about searching for our human nature and redefining it, rather than constantly fearing technology. It was a pleasant experience, but not an exceptional one. I don't think it added much for me or gave me something I will always remember... and that is not meant as a criticism of the book.
Such an interesting book and SO important for this time in our world. Flynn really knows what she is talking about and has such a diverse set of companies/examples to bring into her work. It is no chore reading this book.
Artificial intelligence (AI) is the process by which programmers give learning algorithms to computers. Computers use these algorithms to analyze vast data sets and then create their own algorithms, akin to what parents do with their kids. Movies such as the “Terminator” series and “2001: A Space Odyssey,” along with some books and articles, seem to have given the general public some notion that AI ought to be closely regulated so as to avoid something going radically wrong, but there is little agreement on what this regulation should look like and thus not much of it has come to pass, at least not in the U.S.
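To make that definition concrete, here is a minimal Python sketch (not from the book, and using made-up example data): the programmer supplies only a generic learning algorithm, and the fitted parameters are the decision rule the machine derives for itself from the examples.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [days of symptoms, temperature] -> flu (1) or not (0).
X = np.array([[1, 37.0], [2, 37.2], [5, 38.5], [7, 39.1], [3, 38.8], [1, 36.8]])
y = np.array([0, 0, 1, 1, 1, 0])

# The programmer hands the computer a generic learning algorithm...
model = LogisticRegression().fit(X, y)

# ...and the machine produces its own rule (these learned weights),
# which it can then apply to a new, unseen case.
print(model.coef_, model.intercept_)
print(model.predict([[4, 38.9]]))

Nothing in this toy example hard-codes a rule such as "a fever above 38 means flu"; the model infers its own boundary from the data, which is exactly why the data it learns from, and the biases that come along with it, matter so much.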
Meanwhile, the initial form of AI (narrow AI) is already upon us, and the harm it can do is already visible. The 2016 U.S. presidential election is perhaps the most well-known instance of harmful AI. Facebook now admits that as many as 126 million Americans viewed disinformation posts by Russian operatives. About half of the pre-election Twitter news posts in Michigan were found to be fabricated or untrue. There are many other examples, but if AI in its infancy can have an impact on the heart of our democracy, it can do a lot of bad stuff.
At the same time, however, AI has (or fairly soon will have) the ability to generate the diagnosis and treatment of diseases as if all the top doctors in the world were at one’s bedside. Sometimes I even find it useful on a day-to-day basis if I’m looking for a particular product, but I always have to wonder whether what I’m looking at is really a good product or just the result of some good marketing.
And therein lies the problem. Writing instructions (algorithms) for a computer used to be relatively simple. The programmer had to know something about computer code and engineering, and beyond that only had to know how the process they were writing code for actually worked (and, too often, programmers still got it wrong). When you start writing code for a computer to take over the programming, you enter a whole new dimension where all kinds of moral, ethical, and legal data need to be considered along with computer engineering logic. It’s like having learned to swim in a pool and then being thrown into a tsunami. We are, in fact, swimming out to meet the tsunami, oblivious of the danger.
Flynn Coleman, in her new book “A Human Algorithm,” points out that “while developers understand generally how to build AI . . . understanding exactly how these systems work is largely unknown.” And the amount of time we have to get a proper handle on the unknowns is rapidly decreasing, as “machine intelligence [could] surpass human intelligence in less than thirty years.”
This shouldn’t be surprising when you think about it. AI is, after all, modeled on human learning, but our knowledge of how humans think and make decisions is still a matter of great debate. And that debate really hasn’t gone much beyond the scientists and philosophers yet. If you want to get a feel for where most of the U.S. is at regarding an understanding of AI, much less regulating it, take a look at C-SPAN’s recording of the June 2018 House Science, Space and Technology Committee meeting on AI. When Congressman Veasey asks what the dangers of AI are, he gets a general response along the lines of “well, when you are dealing with a new technology there are a lot of unknowns, some of which could be harmful.”
So it was with immense relief that I read Ms. Coleman’s book. Finally, here is a book on AI that any citizen can read and understand without having degrees in computer science and philosophy. And yet it still contains a breadth of content that I found astonishing. It really is a book that gives people what they need to know so that someday it will be possible to realize the immense benefit of AI without having to flirt with a disaster that could rank up there with climate change (the ultimate demise of the human race).
It’s a scary subject, to be sure, but the added beauty of Ms. Coleman’s book is that, beyond successfully blending the pitfalls with the promise of AI and explaining the steps we need to take to ensure the promise, she extrapolates what this promise can mean for our society as a whole. It’s far more than being vastly more efficient and smarter. If we’re going to continue full steam ahead with making machines that are infinitely more intelligent than we are, we’re going to have to make sure that these machines also have a moral and ethical code the likes of which humans have only dreamed of, things like true equality and the elimination of bias.
That could be the prettiest picture of the future of the human race we can imagine. But it will require work, really smart work, probably the smartest thing we’ve ever done. And Ms. Coleman has shown us the way.
An extremely poorly written book with very little insight into the subject matter. There is no coherent narrative and it looks like it was sloppily put together from notes on other books. The book doesn't even try to disguise it, with quotes and footnotes every few lines. There is no logical flow or even an attempt at one. The resulting mess is like a poorly written, dry academic paper or a Wikipedia article with lots of references, without even a pretense of editing. Almost 40% of the book is footnotes. Go figure.
Diverse concepts such as quantum computing, Eastern mysticism, the uncanny valley, and Universal Basic Income are introduced without any context and forgotten a couple of lines later. The book reeks of intellectual laziness, not even making sense of the material at hand. I breezed through it, but it was still 3.5 hours of my life that I can never get back.
There are much better books on this topic. The author seemed well versed in the topic, but was taking talking points from the research without developing an engaging narrative. I would rather read the reference materials than this book.
I enjoyed this book, but I felt it was a little preachy, a little too broad in its strokes when talking about AI, and a little whiplash-prone, bouncing back and forth between the best- and worst-case scenarios for the application of AI in the future. That being said, it was a quick read for a broad overview of a lot of the big ideas surrounding AI. I also appreciated the focus on the ethics and philosophy of AI. The infographics interspersed through the book were very useful for a broad overview of a timeline of ideas. It also includes very thorough citations and notes for further reading - though not all of these are primary sources. In at least one case I noticed a misrepresentation of a tool - Semantic Scholar was described as an AI tool that can help medical professionals combine data from thousands of sources. This definitely is not the reality - it’s more of a way to explore connected scientific literature; it doesn’t do anything to pull data and synthesize broad conclusions from multiple papers.
Philosophy lesson or a book about AI? Mostly okay. I guess I expected it to be a bit more technical and include a lot more... examples. Instead, this book was really preachy in places. Like, yes Ms. Coleman, we get it. We have to DO SOMETHING about the "Intelligent Machine Age" (barf, why would you even call it that? Just say AI *eye-roll*). But DO WHAT? Sigh.
A fantastic overview of AI, the fears, and the hopes. Found it very enlightening as I don’t know much about AI. Seemed very fair in her arguments and argued both sides of the spectrum. Also went into how AI will/does/could affect every aspect of our lives. Very comprehensive without getting too into the weeds. Maybe slightly outdated as I believe this was written in 2019 and things have changed in the 5 years since then but still very relevant. Highly recommend
This is very well-written, dense with information, and I'm appreciative of its primary message about the necessity of including diverse voices in the decision-making process regarding the development and regulation of AI. However, this book is marketed and exalted as 'groundbreaking', and it is not. I am trying to think why, and I think it's because the author doesn't particularly utilize her unique specialty, as an acclaimed human rights lawyer, to originate a publication-worthy perspective that galvanizes the discourse around the societal impacts of AI. Thus, the author does not originate a new central insight on AI or on the upheaval of exponential computation on our society, in the way that, for instance, Safiya Noble identified the implicit racial biases of algorithm-making, Shoshana Zuboff crafted the topic of 'surveillance capitalism', Frank Pasquale designed potential laws of robotics that would steer the development of AI towards 'intelligence augmentation' and the reinforcement of human talent, or Nick Bostrom devised 'superintelligence' as a lens to examine the effects of the possible achievement of AI agency and sentience. Because of this lack of striking insight, this book paddles through a wide-ranging torrent of informative tidbits about tangential topics, from the history of computing to the different types of machine learning, the lack of ethics classes for engineering undergraduates, the attractions of UBI, musings about the forthcoming future of automation and human redundancy, and the possibility of consciousness in the cosmos. These are all very important topics, and it is great to have these disparate subjects, linked together by the broad subject of AI, compiled in a single introductory reference such as this book; however, it almost seems like overcompensation for the unique, pioneering insight that this book lacks. A lot of the information here could be gleaned from a cursory dive into linked Wikipedia topics, and the perspectives encoded in this book may be familiar already to people with at least a passing reading interest in the literature of AI and society. This book is almost a journalistic treatment that could have been written by a general researcher. Nevertheless, this book is still worth reading because it is a well-written, self-contained treatment of the societal repercussions of AI. For a handbook on the most essential quandaries of the configuration of society by AI, this book is great. Just don't expect it to be extremely groundbreaking.
Read this book along with a few others about AI; I really liked it. Flynn really has a gift when it comes to explaining all the different points of view. I do love her own way of seeing things and how she takes in everyone else's opinions and ideas. I can appreciate people's fear of the unknown, but this book also makes us aware of the beauty of the possibilities, more so because these are POSITIVE alternatives. In other words, it would be phenomenal if all of us working on AI got together and made sure it is used for the benefit of all. I think that would be an amazing accomplishment.
The reality is that only the powerful have the means (money) to make this possible. Still, I do like the idea, and I agree with her that we should all do our part, if we have a way, to make our voices heard. AI is not an open innovation model; as she notes, there are also companies working in secret towards secret outcomes of their own. It will be interesting to see what our future holds, from all possible outcomes.
I read so many excellent quotes that I will need to get my hard copy so I can use them for my classes. I do like to use them so that I can remember them.
Recommend this book for anyone interested in seeing how AI is being developed and why...
I tried to keep an open mind while reading this book written by a human rights lawyer, but it’s very clear that the author is not a technologist. The first three chapters are full of hand-waving, fear-mongering, buzzword-slinging, misrepresentations, misunderstandings, and variations on the mantra “technology ought to be ethical.”
Having read as far as I did, it seems pretty clear to me it’s not going to get any better. I recommend books like Rage Inside the Machine instead.
We are nowhere near intelligent machines, by any formal definition. Though some experts posit we might achieve some success as soon as 2050, many remain skeptical.
The author also seems to imply that computer scientists are ignorant of ethical concerns. This is certainly untrue; courses on ethics in technology have been required in CS programs since 2007 at the latest, as I can attest from my own experience. While the legal aspect of this conversation is important, this book does more harm than good, in my opinion.
Reading such books makes me more convinced that the real threat to humankind is modern liberalism (aka social liberalism).
Yes. It is not war, famine, disease or even AI (artificial intelligence).
So the basic theory of this book is that if you are human (especially a white male; thank God I’m not white!), then you shouldn’t expect that you are more intelligent than, or have any more value than, anything else in the universe. So a dog, a cockroach, or a computer is at least as good as you.
Humans therefore invent AI so that the machine will teach us to be human, understand our consciousness, and fill our empty souls!
Yes this is the devious thinking of the author.
Modern liberals, by downgrading the values that have shaped the development of a civilized society and by normalizing perversion, are the real threat to society and humanity.
This book was a delightful, thought-provoking surprise, from its witty chapter titles to the very surprising directions it took in exploring underexplored paths in the subject of AI, for example challenging the underlying assumption in most AI work that the human model of intelligence is the only way to go.
Moreover, the author's discussion of AI from the perspective of a human rights lawyer who has spent time abroad is timely, as we attempt to meet the simultaneous challenges of increasing global integration brought about by technological progress and commercial necessity, of dealing with the impact of AI, and of what the future holds for humanity.
Coleman set out to write a book advocating for her political agenda under the guise of writing about AI. AI takes a backseat and is only woven into the narrative every now and then, when she remembers what her publisher wanted the book to be about. AI is mentioned at the end of every paragraph or two - saying that it could advance her vision - without providing specifics on how that would work. I got a discourse on the left’s political agenda without learning much at all about the technology or its implications.
Once we move from a human-centric view of life on Earth to one where we share the planet with intelligent machines, a clear picture emerges of the work needed to best position humankind for survival. The book offers an optimistic perspective on the areas we must address to achieve this, the skeletons we will need to face, and a possible path towards a human algorithm.
Definitely a must read with the technology in our lives ever growing! I listened to the audio book for the UCONN School of Business Alumni Book Club & while it started slow, I’m glad I pushed through it. Not a book I would have chosen to read on my own, which is exactly why I join book clubs!
The book is a must-read to understand the scenarios in which AI will make or break humanity. I really like the comprehensive scenarios that this book describes across multiple industries and the questions of ethics associated with them.
Basic. No new ideas and no depth in any of the discussion. Flits from one famous person's hot take to another without adding any value. Good for your aunt who doesn't know how to use her computer, or for the stack of unloved books in a launderette's local neighborhood book swap.
I loved this book! It takes us on a very human journey of who we are by exploring our technological past and looking into our uncertain technological future - reminding us all along that we all have a role to play in defining the next epoch of our existence.
"A Human Algorithm" is about a lot more than AI, which it speaks to very eloquently, it is a look at all of humanity through a technological lens. Beautifully written and inspiring.
Everyone should read this to gain an understanding of the challenges, benefits and concerns we should be discussing regarding both existing AI and that which is quickly on the horizon.