Book Notes: Hello World: Being Human in the Age of Algorithms by Hannah Fry

Summary

Hello World: Being Human in the Age of Algorithms by Hannah Fry (2019) is a layperson's guide to the important field of artificial intelligence, machine learning, automation and algorithms. Fry takes a complicated and highly technical subject and deftly presents, in plain English, the central technical concepts and the pressing ethical and moral considerations. As a consequence, "Hello World" serves as an excellent primer for anyone seeking to better understand the implications of our highly algorithmic future.

The first two chapters of the book cover two key concepts: algorithms and data. Algorithms are the logical sets of instructions that computers use to perform a task. Data are the inputs that algorithms use to output predictions, decisions, classifications and more. Fry is careful to highlight many of the critical questions that need asking: What are the underlying assumptions of a particular algorithm? How is it biased? Why is personal data being collected about me? How is it being used? What are the incentives for the data collector?

The remaining chapters—justice, medicine, cars, crime, art—look at specific applications for algorithms and data. Fry builds upon the foundation of the early chapters and introduces additional concepts like decision trees, algorithmic errors, sensitivity and precision, neural networks, Bayes' theorem, and more. If these concepts sound intimidating, I can assure you: they are not. The author builds her narrative through a series of compelling stories (Malcolm Gladwell would be proud) while deftly incorporating important technical concepts.

The reader learns, for example, how image recognition algorithms can aid pathologists in the detection of breast cancer. We also learn how geo-profiling and predictive policing can identify high-risk areas so that limited law enforcement resources can be put to optimal use. It isn't all rose-colored glasses, however. Fry addresses the problems and issues raised by these new developments as well. For instance, image recognition algorithms that are overly sensitive will produce many false positives (e.g. identify people who are healthy as being sick). A costly consequence is that healthy people might undergo unnecessary treatments.

I enjoyed this book far more than expected (this comes from someone who has some technical familiarity with machine learning). It's easy to take a topic like this and deliver a dry read. That is not the case here; this book is eminently readable. Moreover, too much media attention on the topic of artificial intelligence has been gloom and doom. This book is a healthy counterpoint to the mainstream narrative. Fry presents a sober but optimistic future; a future in which humans have a central role to play.

Pros: Author explains a complex topic in an accessible way. Draws on a broad range of concrete examples to effectively illustrate her points.

Cons: Writing style can be overly casual at times (which doesn't bother me, but might be a nuisance to others).

Verdict: 8/10


Notes & Highlights

Introduction

  • Understanding the relationship between humans and machines is essential for understanding algorithms.
  • Critical questions to ask individually and as a society:

    • Do we understand specific algorithms? How do they work and what are their limits? What are the pros and cons?
    • Does the specific algorithm result in a net benefit to society?
    • When should we trust machines over human judgement?
    • What kind of world do we want to live in?

Chapter 1: Power

  • Example: Deep Blue vs. Kasparov chess match in 1997.

    • IBM designed Deep Blue to appear more "uncertain" than it was: "the machine would occasionally hold off from declaring its move once a calculation had finished, sometimes for several minutes." This unnerved Kasparov, forced him to second-guess himself (and the computer's capabilities) and ultimately altered his chess strategy in the match for the worse.
    • "The power of an algorithm isn't limited to what is contained within its lines of code."
    • "Understanding our own flaws and weaknesses—as well as those of the machine—is the key to remaining in control."
  • Algorithms defined:

    • Dictionary definition: "A step-by-step procedure for solving a problem or accomplishing some end, especially by a computer."
    • "A series of logical instructions that show, from start to finish, how to accomplish a task."
    • Practically speaking, the term connotes a mathematical or logical construct that uses mathematical operations, arithmetic, algebra, calculus, logic and probability, and manifests in the form of computer code.
  • Four algorithmic task-categories:

    • Prioritization:

      • Making an ordered list.
      • Example: Google returns a ranked list of search results sorted by relevance.
      • Example: GPS/Maps apps identify the fastest route to a destination.
      • Example: Deep Blue prioritized the next best move from a huge set of possible moves.
    • Classification:

      • Picking a category.
      • Example: Internet advertisers identify/target groups of users receptive to their marketing messages (based on common characteristics).
      • Example: Automated moderation tools that remove inappropriate content (e.g. spam filters).
    • Association:

      • Finding links and relationships between things.
      • Example: OKCupid and dating algorithms look for links between people [me: How is classification distinct from association? Is it that you have two human parties being matched?]
    • Filtering:

      • Isolating what's important. Separating the signal from the noise.
      • Example: Speech recognition algorithms like Siri, Alexa and Cortana, which filter out your voice from the background noise before processing what you are saying.
      • Example: Facebook and Twitter filter stories that relate to your known interests to help you stay engaged.
  • Many algorithms use combinations of the above task-categories.

  • Two methodological paradigms for algorithms:

    1. Rules-based algorithms: Directly designed by humans with explicit instructions and logic.

      • Easy to comprehend. You can read the program and understand the underlying process.
      • Only usable in cases where humans know how to write the instructions.
    2. Machine-learning algorithms: Don't use a precise set of instructions. Instead, they begin with an objective and leave it to the algorithm to determine how to arrive at that result. (A short sketch contrasting the two paradigms follows this list.)

      • Black box: difficult for human observers to comprehend the logic behind the outcome.
      • Work well for problems where writing a list of instructions doesn't work (for instance, image recognition of objects).
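
A minimal sketch (my own illustration, not from the book) contrasting the two paradigms on a toy spam-filtering task. The messages, keyword list and use of scikit-learn's LogisticRegression are assumptions made for illustration only.

```python
# Rules-based vs. machine-learned: the same task approached both ways.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# 1. Rules-based: a human writes the decision logic explicitly.
def rules_based_spam_check(message: str) -> bool:
    """Flag a message as spam if it matches hand-written rules."""
    suspicious_words = {"winner", "free", "prize", "claim"}
    words = set(message.lower().split())
    return len(words & suspicious_words) >= 2  # threshold chosen by a human

# 2. Machine-learning: we supply labelled examples and an objective,
#    and the algorithm works out its own decision rule from the data.
messages = [
    "claim your free prize now", "you are a winner click here",
    "lunch at noon tomorrow?", "minutes from today's meeting attached",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)        # turn text into word counts
model = LogisticRegression().fit(X, labels)   # learn weights from the data

new_message = ["free prize for the winner"]
print(rules_based_spam_check(new_message[0]))              # rule fires: True
print(model.predict(vectorizer.transform(new_message)))    # learned guess
```

The rule-based check is easy to read but only works because a human knew which words to list; the learned model finds its own rule, but its weights are harder to inspect.
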
  • Regarding "AI" and machine learning: "[It's] more useful to retrieve of what nosotros've been through as a revolution in computational statistics than a revolution in intelligence."

  • "The Search Engine Manipulation Event" (2015 inquiry written report):

    • Experiment that sought to understand the impact of how search engines like Google influence our view of the world.
    • "When people are unaware they are being manipulated, they tend to believe they accept adopted their new thinking voluntarily." — Robert Epstein (psychologist and co-author).
    • People see algorithms as a "convenient source of dominance."
    • Results evidence a lack of user scrutiny and critical thinking towards the quality or veracity of algorithmic output.
  • "The merely mode to objectively judge whether an algorithm is trustworthy is by getting to the bottom of how information technology works."

  • "Having a person with the power of veto in a position to review the suggestions of an algorithm earlier a determination is fabricated is the but sensible way to avoid mistakes."

    • Example: The story of Stanislav Petrov, the Soviet military officer who prevented nuclear annihilation in 1983 when he used his intuition to mistrust an algorithm that told him a nuclear strike from the United States was imminent.
    • Counter-example: The 2015 UK Smiler rollercoaster crash, in which two roller coaster operators overrode a computer algorithm that set off a warning system. The operators, confident that the ride was safe, ignored the warning system, resulting in a catastrophic accident.
  • Algorithm aversion: Phenomenon where humans are less tolerant of algorithmic errors than of their own (even when the humans are more prone to error).

    • We tend to over-trust algorithms UNTIL they exhibit errors or poor results.
    • Once an algorithm demonstrates fallibility, we overcompensate in the other direction and begin to dismiss the generally useful output.
    • [Me: Humans overcompensate too much in one direction or the other based on small data sets, a difficulty comprehending probabilistic outcomes, AND a difficulty separating sound processes/systems from their results.]

Chapter 2: Data

  • Data collection is used by businesses to understand consumer behavior and generate insights that can result in increased profitability and competitive advantage.

  • Data collection is ubiquitous. Notable examples:

    • 1993 Tesco: Loyalty card that first tracked purchase amounts and later actual items purchased. Tesco improved targeted offers to customers based on this information.
    • 2002 Target: Algorithm that identified expectant mothers based on purchasing behaviors.
  • Data brokers: Companies that purchase and collect personal information and then resell it for profit.

    • Notable data brokers: Palantir, Acxiom, CoreLogic, Datalogix, eBureau.
    • Types of information collected: Online shopping data, newsletter signups, website registrations, warranties submitted, home purchasing info, voter registration, browser history and more.
    • A single file is generated for each individual that aggregates and cross-references all of this personal data.
    • File includes: "Your name, your date of birth, your religious affiliation, your vacation habits, your credit-card usage, your net worth, your weight, your height, your political affiliation, your gambling habits, your disabilities, the medication you use, whether you've had an abortion, whether your parents are divorced, whether you're easily addictable, whether you are a rape victim, your opinions on gun control, your projected sexual orientation, your real sexual orientation, and your gullibility."
  • Rich personal data allows for hyper-efficient matching algorithms.

    • Payday lenders can target people with bad credit scores.
    • Online betting can be targeted at people who frequent gambling sites.
  • "Business models are based on the idea of micro-targeting. They are gigantic engines for delivering adverts."

  • "Algorithms are trading on information yous didn't know they had and never willingly offered. They have fabricated your most personal, private secrets into a commodity."

  • The Facebook Cambridge Analytica scandal highlights a situation in which private and sensitive personal information was gathered and then used to manipulate those same individuals.

    • Cambridge Analytica used personality profiles and delivered charged political messages and fake news stories at particularly susceptible audiences.
    • Recall the finding from the 2015 Epstein study in Chapter 1: People are susceptible to manipulation from algorithms that are deemed authoritative.
  • China's "Sesame Credit" as a future example of an omnipresent, government-sponsored data broker.

  • To date there is little government regulation around data collection. Some notable exceptions:

    • EU's GDPR (General Data Protection Regulation). Prevents storage of data without explicit purpose and requires consumer consent for use of data.
  • "Whenever we use an algorithm—particularly a free one—we need to ask ourselves about the subconscious incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a merchandise I'm comfortable with? Would I be better off without it?"

Chapter 3: Justice

  • The judicial system is not an exact science. Judges cannot guarantee precision.

    • This fact is evidenced by the prominence of concepts like "reasonable doubt" and "substantial grounds."
    • Appeals are an important part of the process.
  • The result is a system with significant discrepancies in the treatment of similar cases and defendants.

  • Research studies over the past 50 years demonstrate how much disparity there is:

    • 1977 study: 47 judges shown an identical case and asked to render a decision. Verdicts ranged from setting the accused free to jail time.
    • 2001 study: 81 judges asked to award bail to a variety of imaginary defendants. Some of the cases were identical except that the names of the defendants were changed. Most judges failed to make the same decision on the same case when seeing it for a second time.
  • "Whenever judges accept the freedom to assess cases for themselves, there volition exist massive inconsistencies. Assuasive judges room for discretion means allowing there to be an element of luck in the organization."

  • Burgess prediction tool (1928):

    • The algorithm uses 21 factors deemed significant in determining whether someone would violate parole.
    • Inmates are scored for each question (1 or 0) and can score up to 21 points. High scores between 16 and 21 are deemed low risk for recidivism. (A toy version of this scoring appears below.)
    • The algorithm performed well: 98% of the low-risk group didn't break parole. Only 33% of the high-risk group made it through parole.
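
A toy sketch of the Burgess-style scoring described above (my own illustration; the factor answers here are placeholders, not Burgess's actual 21 factors).

```python
# Each of 21 yes/no factors contributes one point; a high total is read
# as low risk of violating parole (per the 1928 Burgess scheme).

def burgess_style_score(answers: dict) -> int:
    """Sum of favourable factors: 1 point per 'yes', up to 21 points."""
    return sum(1 for favourable in answers.values() if favourable)

# Hypothetical inmate: answers to 21 factors (True = favourable).
inmate = {f"factor_{i}": (i % 3 != 0) for i in range(1, 22)}
score = burgess_style_score(inmate)
print(score, "low risk" if score >= 16 else "higher risk")
```
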
  • Systemic trade-off between consistency and fairness.

  • Decision tree: A flowchart that takes a set of circumstances and assesses, step by step, what to do or what will happen.

(Image from "Hello World")
  • Ensemble method: The combining of several decision trees to generate better predictive performance than the use of a single tree.

  • Random forest: A large group of decision trees that, in concert, can be used to reach a consensus conclusion, accounting for a wide assortment of circumstances and factors that a single (simpler) decision tree cannot.
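
A minimal sketch (my own illustration, using scikit-learn and synthetic data rather than any real court data) of how a single decision tree and a random forest are trained on the same inputs; the forest averages many trees and typically generalizes better than any one of them.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for the "circumstances" of each case.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The ensemble's consensus usually beats the single tree on unseen cases.
print("single tree accuracy:", single_tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```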

  • Two types of algorithmic errors:

    • False negatives: Failure to identify the person or thing the algorithm is looking for. Example: A criminal identification algorithm letting Darth Vader go free.
    • False positives: Incorrectly identifying a person or thing. Example: The same criminal identification algorithm incarcerates Luke Skywalker (who is innocent).
  • COMPAS is a proprietary risk-assessment algorithm.

    • 2016 ProPublica investigation found:

      • The algorithm's false positives were disproportionately black.
      • The algorithm's false negatives were disproportionately white.
    • Is it possible to develop an "unbiased" algorithm?

    • Algorithms like COMPAS are probabilistic:

      • Outcomes will be biased because reality is biased.
      • Example: More men commit homicides, so more men will be falsely accused of having the potential to murder.
      • See the figure below, which shows the ratio of females (top) to males (bottom) flagged by an algorithm as potential murderers. The ratio is 25% female to 75% male. If the algorithm is 96% accurate there will be 4 false positives (which are disproportionately male).
(Image from "Hi World"
  • Should an algorithm reflect current reality or should it encourage an alternative (e.g. more fair) reality?

    • Example: Google image search results for "math professor" could return photos of white men (current reality) or prioritize images of female non-white professors (the reality we aspire to).
  • System 1 vs. System 2 thinking:

    • System 1: Automatic, instinctive, but prone to mistakes.
    • System 2: Slow, analytic, considered, but often lazy.
    • "Our minds aren't built for robust, rational assessment of big, complicated problems."
  • Cognitive biases impact the decision-making of judges:

    • Anchoring effect: The cognitive difficulty of setting numerical values on things. We are better at making comparisons with preexisting values. This tendency can be exploited. Example: If prosecution demands higher damages, a judge is more likely to award higher damages (due to anchoring?).

    • Weber's Law: "The smallest change in a stimulus that can be perceived is proportional to the initial stimulus." This idea can impact the sentence lengths judges choose.

    • More bias examples:

      • Judges with daughters deliver more favorable decisions to women.
      • Judges are less likely to award bail if the local sports team has lost recently.
      • Time of day may impact whether or not a favorable decision is made.
  • "The choice isn't betwixt a flawed algorithm and some imaginary perfect organization. The just off-white comparison to brand is betwixt the algorithm and what we'd be left with in its absence."

  • A well-designed and transparent algorithm can minimize systematic bias and random error (though it still will not be 100% perfect).

  • Hybrid approach that balances and combines algorithmic thinking with human judgement may be the optimal solution.

Chapter 4: Medicine

  • Modern medicine is built upon the finding of patterns in data.

  • Algorithms are very good at pattern recognition, classification and prediction.

  • Limitations of current human pathologists:

    • Cases at the extremes are easy to identify. Pathologist diagnostic accuracy is > 96% in these cases.
    • Ambiguous cases (between normal and obviously malignant) pose the biggest diagnostic challenges.
    • "One 2015 study took 72 biopsies of breast tissue, all of which were deemed to contain cells with benign abnormalities (a category towards the heart of the spectrum) and asked 115 pathologists for their opinion…the pathologists only came to the same diagnosis 48 per cent of the time."
  • Image recognition algorithms can be used to help improve the accuracy and speed of pathology diagnoses.

  • Neural networks:

    • "An enormous mathematical structure that features a great many knobs and dials. You feed your picture in at one cease, it flows through the structure, and out at the other end comes a guess equally to what that image contains. A probability for each category: Dog; Not dog."

    • "At the beginning, your neural network is a complete pile of junk. It starts with no noesis – no thought of what is or isn't a dog. All the dials and knobs are set up to random. As a consequence, the answers information technology provides are all over the identify – it couldn't accurately recognize an image if its power source depended on information technology. But with every motion picture you feed into it, you tweak those knobs and dials. Slowly, you train information technology."

    • Once a neural network is properly trained and tuned, it can classify previously unseen images as either "dog" or "not dog" based on prior grooming.

    • The neural network is a black box in that all the "knobs and dials" are not well understood by the human observer.

      • Example: An algorithm that differentiates between huskies and wolves doesn't brand a decision on the fauna itself, merely rather looks for the presence of snow.
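
A toy sketch of the "knobs and dials" idea (my own illustration: a single-layer model on made-up numeric features rather than real images). The weights start random, and every labelled example nudges them a little until the dog / not-dog guesses improve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image features: 200 examples, 10 numbers each.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)    # 1 = "dog", 0 = "not dog"

# The "knobs and dials": weights set to random at the start.
w = rng.normal(size=10)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    p = sigmoid(X @ w + b)            # the model's guess: P(dog) per example
    grad_w = X.T @ (p - y) / len(y)   # how much to nudge each knob
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # tweak the knobs a little
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy after tuning: {accuracy:.2f}")
```
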
  • The sensitivity vs. specificity tradeoff:

    • These are statistical measures of performance in a binary classification test.
    • Ideally you want as few false positives and as few false negatives as possible.
    • Sensitivity is the statistical measure of correctly identifying true positives. Example: A test for cancer with 100% sensitivity will identify everyone who has cancer. The downside is that it will also result in false positives (people identified as having cancer who do not have cancer).
    • Specificity is the statistical measure of correctly identifying true negatives. Example: A test for cancer with 100% specificity will never identify someone as having cancer who does not. The downside is that some people with cancer will be missed.
    • Very few tests excel at both sensitivity and specificity. There is usually a tradeoff between the two. Example: An algorithm prioritizing sensitivity will mean lower specificity. (A small computation of both measures follows this list.)
    • "Human pathologists don't tend to have problems with specificity. They almost never mistakenly identify cells as malignant when they're not. But people do struggle a little with sensitivity. It's easy for us to miss tiny tumours…"
  • 1986 Nun Study

    • Found a connection between language skills as a young adult and dementia in old age.
    • "Subtle signals about our future health can hide in the tiniest, most unexpected fragments of information."
    • Suggests the immense potential for medical algorithms combined with rich individual datasets.
  • Important to note that a diagnosis alone is not sufficient to determine how a particular condition will develop. There is a cost to veering too far into the realm of over-diagnosis and over-treatment.

  • "If yous can take a picture of it and stick a label on it, you can create an algorithm to find it."

  • Another important consideration for healthcare algorithms: the balance between privacy and public good.

Chapter 5: Cars

  • Autonomous driving appears to be a simple problem (the only outputs are speed and direction) but in truth it is extremely complicated due to the broad range of environmental circumstances and unpredictable variables (obstacles, terrain, human interactions).

  • "Smart cars" must take inputs from a variety of sensors:

    • Cameras:

      • Strengths: Visual information.
      • Weakness: Cannot "encounter" in 3D on its ain. Non proficient at measurements.
    • GPS:

      • Strengths: Provides high-level location information.
      • Weakness: Prone to uncertainty, not precise.
    • LiDAR (Light Detection and Ranging):

      • Strengths: Measuring distances.
      • Weakness: Cannot detect texture or color. Not good at long distances.
    • Radar:

      • Strengths: Effective in all weather conditions, can detect obstacles at long distances. Can see through some materials.
      • Weakness: Not good at providing details (e.g. shape or structure of obstacles).
  • The trick to building a driverless car is to combine the various vehicle sensors into a seamless algorithm.

    • One problem is determining how to prioritize and harness disparate data.

    • Example: Tumbleweeds stymied vehicles in the DARPA Grand Challenge (a contest for autonomous vehicles).

      • LiDAR says there is an obstacle ahead.
      • The camera agrees with LiDAR that there is an object.
      • Radar, which passes through the flimsy object, says there's nothing to worry about.
  • "Every measurement taken by the car will have some margin of error: radar readings, the pitch, the roll, the rotations of the wheels, the inertia of the vehicle. Nothing is always 100 per cent reliable. Plus, dissimilar conditions make things worse: rain affects LiDAR; glaring sunlight can bear on the cameras; and long, bumpy drives wreak havoc with accelerometers."

  • Bayes' theorem:

    • A system for updating a belief in a hypothesis on the basis of evidence.
    • The theory accepts that you can never be completely certain about the truth but that you can make a "best guess" from the available information.
    • Sharon Bertsch McGrayne: "Bayes runs counter to the deeply held conviction that modern science requires objectivity and precision."
    • "Bayes allows you to draw sensible conclusions from sketchy observations, from messy, incomplete and approximate data — even from ignorance."
    • Probabilistic inference: The iterative process of using the latest data available (along with Bayes' theorem) to infer a truth. In the case of an automobile, this means understanding the location of the vehicle in the real world relative to the array of static and dynamic obstacles surrounding it.
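
A minimal sketch of a Bayesian update in the spirit of the tumbleweed example. The prior and the sensor reliabilities are invented numbers, and the three sensor readings are treated as conditionally independent; this only shows the mechanics of the update, not how any real vehicle fuses its sensors.

```python
def bayes_update(prior: float, p_reading_if_solid: float, p_reading_if_not: float) -> float:
    """Posterior P(solid obstacle) after observing one sensor reading."""
    evidence = p_reading_if_solid * prior + p_reading_if_not * (1 - prior)
    return p_reading_if_solid * prior / evidence

belief = 0.5   # prior: no idea whether the object ahead is solid

# LiDAR and the camera both report "object present": likelier if it is solid.
belief = bayes_update(belief, p_reading_if_solid=0.9, p_reading_if_not=0.4)
belief = bayes_update(belief, p_reading_if_solid=0.8, p_reading_if_not=0.5)

# Radar passes straight through: that reading is likelier if it is NOT solid.
belief = bayes_update(belief, p_reading_if_solid=0.1, p_reading_if_not=0.7)

print(f"P(solid obstacle) after all three sensors: {belief:.2f}")
```
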
  • The trolley problem:

    • A thought experiment in the field of ethics.

    • The problem: There is a runaway trolley barreling down the tracks. Ahead on the tracks are five people unaware of the trolley. The trolley is headed straight towards them. You are observing the scene and standing next to a lever. If you pull the lever, the trolley will switch to another set of tracks. However, there is one person on the side track. The dilemma:

      • Do nothing and allow the trolley to kill the five people.
      • Pull the lever, save the five people and kill the one person instead.
    • The trolley problem demonstrates one type of ethical dilemma facing algorithmic systems.

  • "Driverless applied science is categorized using six dissimilar levels:

    • Level 0: No automation.
    • Level 1: Foot off.
    • Level 2: Cruise control. Hands off.
    • Level 3: Eyes off.
    • Level 4: Geo-fenced autonomous vehicles. Brain off.
    • Level 5: Fully automatic.
  • "In 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the subconscious dangers of relying too heavily on automated systems. Build a machine to improve human being performance, she explained, and information technology volition pb – ironically – to a reduction in man power."

    • Example: People don't remember phone numbers anymore.
    • Example: People can't navigate city streets without GPS.
  • Author advocates for the hybrid approach: "let the skills of the machine complement the skills of the human."

    • Toyota is building cars with two modes: "chauffeur mode" (an auto-driving mode) and "guardian mode" (a backup safety mode while a human drives).
  • Important question that needs to be asked: "Once you've built a flawed algorithm that can calculate something, should you let it?"

Chapter 6: Crime

  • People committing crime create reliable geographical patterns:

    • Most people operate locally.
    • Serious crimes tend to take place close to where the perpetrator lives.
  • Distance decay: A criminological phenomenon positing that the chance of finding a perpetrator's residence drops as you move farther and farther away from the scene of a crime.

  • Buffer zone: A geographical pattern in which serial offenders are unlikely to target victims who live too close.

  • Geoprofiling algorithm:

    • Uses the two key patterns of distance decay and the buffer zone.
    • The algorithm outputs a plausible range of locations where a perpetrator might live.
    • This output in turn helps police prioritize their list of suspects and focus their limited resources on high-probability leads.
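
A simplified sketch (my own illustration, not an actual geoprofiling formula such as Rossmo's) of how a buffer zone and distance decay might combine to score candidate home locations against a set of crime scenes.

```python
import math

def location_score(candidate, crime_scenes, buffer_radius=1.0, decay=0.5):
    """Higher score = more plausible home location for the offender."""
    score = 0.0
    for scene in crime_scenes:
        d = math.dist(candidate, scene)
        if d < buffer_radius:
            score += d / buffer_radius   # buffer zone: low right at the scene
        else:
            score += math.exp(-decay * (d - buffer_radius))  # distance decay
    return score

crime_scenes = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]   # toy coordinates
for candidate in [(0.1, 0.1), (1.0, 1.5), (6.0, 6.0)]:
    print(candidate, round(location_score(candidate, crime_scenes), 3))
```
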
  • CompStat data tracking tool:

    • Used to track crime data and identify urban hotspots.
    • Crimes are no longer seen as isolated incidents.
    • Police forces can focus their attention on the areas where a disproportionate level of crime occurs.
    • A Boston study concluded that 66% of street robberies occurred on just 8% of the city's streets.
    • A Minneapolis study concluded that 50% of the emergency calls to the police came from just 3% of the city's area.
  • Flag and boost theories for hotspot forecasting:

    • Flag: Fixed factors that represent higher risk for certain crimes. For instance: a highly trafficked thoroughfare vs. a quiet side street.
    • Boost: Current factors in the surrounding area. For instance, once a location is victimized, the chance of it or the surrounding properties being victimized increases significantly.
    • Criminologists have found that earthquake patterns and aftershocks are a useful model for understanding crime "aftershocks."
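
A small sketch of the flag-and-boost idea (illustrative only; the boost size and half-life are made-up parameters): a location's risk is a fixed "flag" baseline plus a "boost" that jumps after each recent nearby burglary and then decays over time, echoing the aftershock analogy.

```python
import math

def hotspot_risk(flag_risk, days_since_events, boost=0.5, half_life_days=14.0):
    """Flag baseline plus an exponentially decaying boost per recent event."""
    decay_rate = math.log(2) / half_life_days
    total_boost = sum(boost * math.exp(-decay_rate * d) for d in days_since_events)
    return flag_risk + total_boost

# A busy street (high flag) with no recent burglaries vs. a quiet street
# (low flag) burgled 2 and 5 days ago.
print(hotspot_risk(flag_risk=0.6, days_since_events=[]))
print(hotspot_risk(flag_risk=0.2, days_since_events=[2, 5]))
```
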
  • PredPol (aka Predictive Policing):

    • Predictive policing cannot target individual people, only geography.
    • It can only predict the risk of future events, not the events themselves.
    • Furthermore the task of reducing crime based on the predictive modeling is a completely separate problem.
    • Many PredPol algorithms are proprietary "black box" solutions with no transparency as to algorithmic biases.
  • Feedback loops:

    • By acting on a predictive algorithm you risk reinforcing the presence of whatever it is you are tracking.
    • Example: With a "cops-on-the-dots" tactic, where police patrols are sent to predicted hotspots, the police will observe and encounter crime in a specific location simply by being there. In other words, the feedback loop creates a self-fulfilling prophecy.
  • Facial recognition algorithms are becoming important for modern policing but come with a variety of problems.

    • Humans and computers are not 100% accurate at identifying people.
    • Resemblance and identity are not the same thing.
    • Apple's Face ID system is not flawless: It can be fooled by twins, siblings and children.
  • Algorithms in law enforcement require a trade-off between privacy and protection, fairness and safety.

    • Are false positives ever acceptable?

Chapter 7: Art

  • Social proof: "Whenever we haven't got enough data to make decisions for ourselves, we have a habit of copying the behaviour of those around us."

    • An important factor in the popularity of some works of art.
    • Popularity can serve as a proxy for quality.
    • Quality and luck also have roles to play in the success of art.
  • Algorithms have a difficult time predicting popularity:

    • Novelty is an important factor: "we're put off by the bland, but also detest the radically unfamiliar." Striking the right balance is difficult.
    • Algorithms can quantify the similarity to whatever has gone before.
    • Spotify and Netflix can recommend content that is "good enough to insure you against disappointment."
  • EMI (Experiments in Musical Intelligence):

    • A computer algorithm that generates music that humans have mistaken as composed by a human.
    • "However beautiful EMI's music may sound, it is based on a pure recombination of existing work. It's mimicking the patterns found in Bach's music, rather than actually composing any music itself."
  • "True art can't be created by accident. At that place are boundaries to the reach of algorithms. Limits to what can be quantified. Amidst all of the staggeringly impressive, mind-boggling things that data and statistics can tell me, how it feels to be human isn't ane of them."

Conclusion

  • Solving problems is not an end point. Algorithms create as many new problems as they solve old ones.

    • Privacy considerations.
    • Bias and discrimination.
    • Accountability.
    • Transparency.
    • Fairness.
  • "Thinking of algorithms every bit some kind of authorisation is exactly where nosotros're going wrong."
  • "Our reluctance to question the power of an algorithm has opened the door to people who wish to exploit us."
  • We must accept that perfection does not exist. Any system will exhibit some kind of bias.
  • The author advocates for a hybrid approach in which algorithms complement human capabilities and human oversight and intuition remain closely linked to the decision-making process.
  • "In the age of the algorithm, humans have never been more important."


Source: https://mentalpivot.com/book-notes-hello-world-being-human-in-the-age-of-algorithms-by-hannah-fry/
