The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee
Creativity, Inc: Overcoming the Unseen Forces that Stand in the Way of True Inspiration by Ed Catmull with Amy Wallace
Hack Attack: How the Truth Caught up with Rupert Murdoch by Nick Davies
House of Debt: How They (and You) Caused the Great Recession, and How We Can Prevent It From Happening Again by Atif Mian and Amir Sufi
Capital in the Twenty-First Century by Thomas Piketty
Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance by Julia Angwin

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies

By Erik Brynjolfsson and Andrew McAfee

Excerpted from The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, published by W. W. Norton & Company. Copyright 2014. All rights reserved.

We wrote this book because we got confused. For years we have studied the impact of digital technologies like computers, software, and communications networks, and we thought we had a decent understanding of their capabilities and limitations. But over the past few years, they started surprising us. Computers started diagnosing diseases, listening and speaking to us, and writing high-quality prose, while robots started scurrying around warehouses and driving cars with minimal or no guidance. Digital technologies had been laughably bad at a lot of these things for a long time— then they suddenly got very good. How did this happen? And what were the implications of this progress, which was astonishing and yet came to be considered a matter of course?

We decided to team up and see if we could answer these questions. We did the normal things business academics do: read lots of papers and books, looked at many different kinds of data, and batted around ideas and hypotheses with each other. This was necessary and valuable, but the real learning, and the real fun, started when we went out into the world. We spoke with inventors, investors, entrepreneurs, engineers, scientists, and many others who make technology and put it to work.

Thanks to their openness and generosity, we had some futuristic experiences in today’s incredible environment of digital innovation. We’ve ridden in a driverless car, watched a computer beat teams of Harvard and MIT students in a game of Jeopardy!, trained an industrial robot by grabbing its wrist and guiding it through a series of steps, handled a beautiful metal bowl that was made in a 3D printer, and had countless other mind-melting encounters with technology.

This work led us to three broad conclusions.

The first is that we’re living in a time of astonishing progress with digital technologies— those that have computer hardware, software, and networks at their core. These technologies are not brand-new; businesses have been buying computers for more than half a century, and Time magazine declared the personal computer its “Machine of the Year” in 1982. But just as it took generations to improve the steam engine to the point that it could power the Industrial Revolution, it’s also taken time to refine our digital engines.

We’ll show why and how the full force of these technologies has recently been achieved and give examples of its power. “Full,” though, doesn’t mean “mature.” Computers are going to continue to improve and to do new and unprecedented things. By “full force,” we mean simply that the key building blocks are already in place for digital technologies to be as important and transformational to society and the economy as the steam engine. In short, we’re at an inflection point— a point where the curve starts to bend a lot—because of computers. We are entering a second machine age.

Our second conclusion is that the transformations brought about by digital technology will be profoundly beneficial ones. We’re heading into an era that won’t just be different; it will be better, because we’ll be able to increase both the variety and the volume of our consumption. When we phrase it that way— in the dry vocabulary of economics— it almost sounds unappealing. Who wants to consume more and more all the time? But we don’t just consume calories and gasoline. We also consume information from books and friends, entertainment from superstars and amateurs, expertise from teachers and doctors, and countless other things that are not made of atoms. Technology can bring us more choice and even freedom.

When these things are digitized— when they’re converted into bits that can be stored on a computer and sent over a network— they acquire some weird and wonderful properties. They’re subject to different economics, where abundance is the norm rather than scarcity. As we’ll show, digital goods are not like physical ones, and these differences matter.

Of course, physical goods are still essential, and most of us would like them to have greater volume, variety, and quality. Whether or not we want to eat more, we’d like to eat better or different meals. Whether or not we want to burn more fossil fuels, we’d like to visit more places with less hassle. Computers are helping accomplish these goals, and many others. Digitization is improving the physical world, and these improvements are only going to become more important. Among economic historians there’s wide agreement that, as Martin Weitzman puts it, “the long-term growth of an advanced economy is dominated by the behavior of technical progress.” As we’ll show, technical progress is improving exponentially.

Our third conclusion is less optimistic: digitization is going to bring with it some thorny challenges. This in itself should not be too surprising or alarming; even the most beneficial developments have unpleasant consequences that must be managed. The Industrial Revolution was accompanied by soot-filled London skies and horrific exploitation of child labor. What will be their modern equivalents? Rapid and accelerating digitization is likely to bring economic rather than environmental disruption, stemming from the fact that as computers get more powerful, companies have less need for some kinds of workers. Technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead. As we’ll demonstrate, there’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there’s never been a worse time to be a worker with only ‘ordinary’ skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate.

Over time, the people of England and other countries concluded that some aspects of the Industrial Revolution were unacceptable and took steps to end them (democratic government and technological progress both helped with this). Child labor no longer exists in the UK, and London air contains less smoke and sulfur dioxide now than at any time since at least the late 1500s. 13 The challenges of the digital revolution can also be met, but first we have to be clear on what they are. It’s important to discuss the likely negative consequences of the second machine age and start a dialogue about how to mitigate them— we are confident that they’re not insurmountable. But they won’t fix themselves, either. We’ll offer our thoughts on this important topic in the chapters to come.

So this is a book about the second machine age unfolding right now— an inflection point in the history of our economies and societies because of digitization. It’s an inflection point in the right direction— bounty instead of scarcity, freedom instead of constraint—but one that will bring with it some difficult challenges and choices.

————————————————————————————-

Creativity, Inc: Overcoming the Unseen Forces that Stand in the Way of True Inspiration

By Ed Catmull with Amy Wallace

Electronically reproduced from Creativity, Inc: Overcoming the Unseen Forces that Stand in the Way of True Inspiration, by Ed Catmull with Amy Wallace, Bantam Press/Transworld Publishers (UK), Random House (US) © 2014. All rights reserved.

Introduction: Lost and Found

Every morning, as I walk into Pixar Animation Studios— past the twenty-foot-high sculpture of Luxo Jr., our friendly desk lamp mascot, through the double doors and into a spectacular glass-ceilinged atrium where a man-sized Buzz Lightyear and Woody, made entirely of Lego bricks, stand at attention, up the stairs past sketches and paintings of the characters that have populated our fourteen films— I am struck by the unique culture that defines this place. Although I’ve made this walk thousands of times, it never gets old.

Built on the site of a former cannery, Pixar’s fifteen-acre campus, just over the Bay Bridge from San Francisco, was designed, inside and out, by Steve Jobs. (Its name, in fact, is The Steve Jobs Building.) It has well-thought-out patterns of entry and egress that encourage people to mingle, meet, and communicate. Outside, there is a soccer field, a volleyball court, a swimming pool, and a six-hundred-seat amphitheater. Sometimes visitors misunderstand the place, thinking it’s fancy for fancy’s sake. What they miss is that the unifying idea for this building isn’t luxury but community. Steve wanted the building to support our work by enhancing our ability to collaborate.

The animators who work here are free to— no, encouraged to— decorate their work spaces in whatever style they wish. They spend their days inside pink dollhouses whose ceilings are hung with miniature chandeliers, tiki huts made of real bamboo, and castles whose meticulously painted, fifteen-foot-high styrofoam turrets appear to be carved from stone. Annual company traditions include “Pixarpalooza,” where our in-house rock bands battle for dominance, shredding their hearts out on stages we erect on our front lawn.

The point is, we value self-expression here. This tends to make a big impression on visitors, who often tell me that the experience of walking into Pixar leaves them feeling a little wistful, like something is missing in their work lives— a palpable energy, a feeling of collaboration and unfettered creativity, a sense, not to be corny, of possibility. I respond by telling them that the feeling they are picking up on— call it exuberance or irreverence, even whimsy— is integral to our success.

But it’s not what makes Pixar special. What makes Pixar special is that we acknowledge we will always have problems, many of them hidden from our view; that we work hard to uncover these problems, even if doing so means making ourselves uncomfortable; and that, when we come across a problem, we marshal all of our energies to solve it. This, more than any elaborate party or turreted workstation, is why I love coming to work in the morning. It is what motivates me and gives me a definite sense of mission. There was a time, however, when my purpose here felt a lot less clear to me. And it might surprise you when I tell you when.

On November 22, 1995, Toy Story debuted in America’s theaters and became the largest Thanksgiving opening in history. Critics heralded it as “inventive” (Time), “brilliant” and “exultantly witty” (The New York Times), and “visionary” (Chicago Sun-Times). To find a movie worthy of comparison, wrote The Washington Post, one had to go back to 1939, to The Wizard of Oz.

The making of Toy Story— the first feature film to be animated entirely on a computer— had required every ounce of our tenacity, artistry, technical wizardry, and endurance. The hundred or so men and women who produced it had weathered countless ups and downs as well as the ever-present, hair-raising knowledge that our survival depended on this 80-minute experiment. For five straight years, we’d fought to do Toy Story our way. We’d resisted the advice of Disney executives who believed that since they’d had such success with musicals, we too should fill our movie with songs. We’d rebooted the story completely, more than once, to make sure it rang true. We’d worked nights, weekends, and holidays— mostly without complaint. Despite being novice filmmakers at a fledgling studio in dire financial straits, we had put our faith in a simple idea: If we made something that we wanted to see, others would want to see it, too. For so long, it felt like we had been pushing that rock up the hill, trying to do the impossible. There were plenty of moments when the future of Pixar was in doubt. Now, we were suddenly being held up as an example of what could happen when artists trusted their guts.

Toy Story went on to become the top-grossing film of the year and would earn $358 million worldwide. But it wasn’t just the numbers that made us proud; money, after all, is just one measure of a thriving company and usually not the most meaningful one. No, what I found gratifying was what we’d created. Review after review focused on the film’s moving plotline and its rich, three-dimensional characters— only briefly mentioning, almost as an aside, that it had been made on a computer. While there was much innovation that enabled our work, we had not let the technology overwhelm our real purpose: making a great film.

On a personal level, Toy Story represented the fulfilment of a goal I had pursued for more than two decades and had dreamed about since I was a boy. Growing up in the 1950s, I had yearned to be a Disney animator but had no idea how to go about it. Instinctively, I realize now, I embraced computer graphics— then a new field— as a means of pursuing that dream. If I couldn’t animate by hand, there had to be another way. In graduate school, I’d quietly set a goal of making the first computer-animated feature film, and I’d worked tirelessly for twenty years to accomplish it.

Now, the goal that had been a driving force in my life had been reached, and there was an enormous sense of relief and exhilaration— at least at first. In the wake of Toy Story’s release, we took the company public, raising the kind of money that would ensure our future as an independent production house, and began work on two new feature-length projects, A Bug’s Life and Toy Story 2. Everything was going our way, and yet I felt adrift. In fulfilling a goal, I had lost some essential framework. Is this really what I want to do? I began asking myself. The doubts surprised and confused me, and I kept them to myself. I had served as Pixar’s president for most of the company’s existence. I loved the place and everything that it stood for. Still, I couldn’t deny that achieving the goal that had defined my professional life had left me without one. Is this all there is? I wondered. Is it time for a new challenge?

It wasn’t that I thought Pixar had “arrived” or that my work was done. I knew there were major obstacles in front of us. The company was growing quickly, with lots of shareholders to please, and we were racing to put two new films into production. There was, in short, plenty to occupy my working hours. But my internal sense of purpose— the thing that had led me to sleep on the floor of the computer lab in graduate school just to get more hours on the mainframe, that kept me awake at night, as a kid, solving puzzles in my head, that fueled my every workday— had gone missing. I’d spent two decades building a train and laying its track. Now, the thought of merely driving it struck me as a far less interesting task. Was making one film after another enough to engage me? I wondered. What would be my organizing principle now?

It would take a full year for the answer to emerge.

————————————————————————————-

Hack Attack: How the Truth Caught up with Rupert Murdoch

By Nick Davies

Electronically reproduced from Hack Attack: How the Truth Caught up with Rupert Murdoch, by Nick Davies, University of Chicago Press. Copyright © 2014. All rights reserved.

For anybody who wants to understand why things went so wrong in British newspapers, there is a very simple answer which consists of only two words – ‘Kelvin’ and ‘MacKenzie’.

When Rupert Murdoch made him editor of the Sun in 1981, MacKenzie effectively took the book of journalistic rules and flushed it down one of the office’s famously horrible toilets. From then, until finally he was removed from his post in 1994, MacKenzie’s world ran on very simple lines: anything goes, nobody cares, nothing can stop us now. Appointed by a man who prided himself on being an outsider and on pushing boundaries, Kelvin MacKenzie was an editor who precisely matched his employer’s approach to life – a journalist uninterested even in the most fundamental rule of all, to try to tell the truth. As he later told a seminar organised by Lord Justice Leveson, MacKenzie had a simple approach to fact-checking: ‘Basically my view was that if it sounded right, it was probably right and therefore we should lob it in.’

This was the editor who referred to the office computers as ‘scamulators’ and who scamulated a long list of phoney stories, including most notoriously the ‘world exclusive interview’ with Marica McKay, widow of a British soldier killed in the Falklands, who, in truth, had given the Sun no interview at all; the vicious libel on Liverpool football fans, accused by the Sun of pissing on police and picking the pockets of the dead at the Hillsborough stadium disaster; the fictional front page claiming that the comedian Freddie Starr had eaten a live hamster in a sandwich; the completely false story about Elton John paying to have sex with a rent boy. For Lord Justice Leveson, MacKenzie recalled how the Sun had paid Elton John £1 million in damages for that particular piece of scamulation and how he had then reflected on the Sun’s attempts to check their facts and succeeded in drawing the most perverse of conclusions. ‘So much for checking a story,’ he grumbled. ‘I never did it again.’

The PCC Code of Practice said journalists should not invade people’s privacy. MacKenzie simply and baldly said that he ‘had no regard for it’. As one particularly sensitive example of protected privacy, the law said that newspapers should not publicly identify the victims of rape – MacKenzie went right ahead and published a front-page photograph of a woman who had been raped with special violence.

The rule-breaking was taken to a peak after the coincidence that in the same year that MacKenzie became editor of the Sun, the United Kingdom saw the opening chapter of what was to become the biggest human-interest story in the world. In 1981, the royal family acquired a new princess. For better and worse, the Diana story busted straight through the wall of deference which previously had concealed most of the private lives of those who lived in the Palace. MacKenzie’s Sun led the way. If that meant publishing photographs of Diana six months pregnant in a bikini, taken on a telephoto lens without her knowledge, then that was fine because he was merely showing ‘a legitimate interest in the royal family as living, breathing people’. If it meant making up stories, that too was no problem. In their brilliant account of life at the Sun, Stick it Up Your Punter!, Peter Chippindale and Chris Horrie describe MacKenzie telling his royal correspondent, Harry Arnold, that he needed a front-page story about the royal family every Monday morning, adding: ‘Don’t worry if it’s not true – so long as there’s not too much of a fuss about it afterwards.’

Once Diana’s life had been dragged into the newsroom and converted into raw material to be exploited without limit, the private lives of other public figures were hauled in behind her. And then the private lives of private figures. There were no boundaries. MacKenzie adopted a crude populist view of the world, designed simply to please an imaginary Sun reader, defined in his own words as ‘the bloke you see in the pub, a right old fascist, wants to send the wogs back, buy his poxy council house. He’s afraid of the unions, afraid of the Russians, hates the queers and the weirdos and drug dealers.’

MacKenzie not only produced a paper to please this imaginary bigot, he himself was the bigot. Chippindale and Horrie record his habit of referring to gay men as ‘botty burglars’ and ‘pooftahs’. He published a story falsely quoting a psychologist who was supposed to have said ‘All homosexuals should be exterminated to stop the spread of AIDS.’ He was no better with race, for example dismissing Richard Attenborough’s film about Gandhi as ‘a lot of fucking bollocks about an emaciated coon’.

From this position it followed logically that he abandoned any pretence of fair reporting about the governing of the country. A couple of hundred years ago, British journalists had disgraced their trade by selling their collaboration to politicians – for a fee, these ‘hacks’ would write whatever their paymasters required. In spite of all the twists and turns through which journalism had tried to redeem itself, MacKenzie acted as an unpaid political hack, turning the Sun into a weapon to attack all those who might upset his ‘right old fascist’ reader, scamulating as he went.

MacKenzie inflicted his ways on those who worked for him. There probably never was a rule against bullying, but if there was, MacKenzie would have shattered it. He was an Olympic-gold-medal-winning office tyrant. Colin Dunne, a feature writer who had worked on the Sun before MacKenzie took over, described the regime of relentless labour which he introduced: ‘Whenever Kelvin saw an empty office, or even an empty chair, he was overcome with the fear that someone somewhere was having a good time. And it was his personal mission to put a stop to it.’ Chippindale and Horrie record the advice which MacKenzie offered to an elderly graphic artist whose presence upset him: ‘Do us all a favour, you useless cunt: cut your throat.’

The infection spread through the Sun and was then compounded as those who had served under him moved to rival titles, taking his reckless ways with them. And in newsrooms without rules, why would anybody obey the law?

————————————————————————————-

House of Debt: How They (and You) Caused the Great Recession, and How We Can Prevent It From Happening Again

By Atif Mian and Amir Sufi

You are reading copyrighted material published by University of Chicago Press. Unauthorized posting, copying, or distributing of this work except as permitted under U.S. copyright law is illegal and injures the author and publisher.

The college graduating class of 2010 had little time to celebrate their freshly minted diplomas. The severe recession smacked them with the harsh reality of looking for a job in a horrible labor market. For college graduates at the time, the unemployment rate was over 9 percent.1 When they entered college in 2006, none of them could have predicted such a disastrous situation. Since 1989, the unemployment rate for college graduates had never exceeded 8 percent.

The bleak jobs picture threatened the livelihood of recent graduates for another reason: many left college saddled with enormous student-debt burdens. Driven by the allure of a decent salary with a college degree, Americans borrowed to go to school. Outstanding student debt doubled from 2005 to 2010, and by 2012 total student debt in the U.S. economy surpassed $1 trillion.2 The Department of Education estimated that two-thirds of bachelor’s degree recipients borrowed money from either the government or private lenders.3

Unfortunately for the 2010 graduates, debt contracts don’t care what the labor market looks like when you graduate. Regardless of whether a graduate can find a well-paying job, they demand payment. Student debt is especially pernicious in this regard because it cannot be discharged in bankruptcy. And the government can garnish your wages or take part of your tax refund or Social Security payments to ensure that they get paid on federal loans.4

The combination of unemployment and the overhang of student debt hampered demand just when the economy needed it most. Recent college graduates with high student debt delayed major purchases, and many were forced to move back in with their parents.5 As Andrew Martin and Andrew Lehren of the New York Times put it, “Growing student debt hangs over the economic recovery like a dark cloud for a generation of college graduates and indebted dropouts.”6 Many reconsidered the benefits of college altogether. Ezra Kazee, an unemployed college graduate with $29,000 of student debt, was interviewed for a story on student-loan burdens. “You often hear the quote that you can’t put a price on ignorance,” he said. “But with the way higher education is going, ignorance is looking more and more affordable every day.”7

The Risk-Sharing Principle

The student debt debacle is another example of the financial system failing us. Despite the high cost of a college degree, most economists agree that it is valuable because of the wage premium it commands. Yet young Americans increasingly recognize that student debt unfairly forces them to bear a large amount of aggregate economic risk. Debt protects the lender even if the job market deteriorates, but graduates are forced to cobble money together to pay down the loan. Forcing young Americans to bear this risk makes no economic sense. College graduates were thrown into dire circumstances just because they happened to be born in 1988, twenty-two years before the most disastrous labor market in recent history. Why should they be punished for that? Rather than facilitate the acquisition of valuable knowledge, a financial system built on debt increasingly discourages college aspirations.

Both student debt and mortgages illustrate a broader principle. If we’re going to fix the financial system— if we are to avoid the painful boom-and-bust episodes that are becoming all too frequent—we must address the key problem: the inflexibility of debt contracts. When someone finances the purchase of a home or a college education, the contract they sign must allow for some sharing of the downside risk. The contract must be made contingent on economic outcomes so that the financial system helps us. It must resemble equity more than debt.8

This principle can be seen easily in the context of education. Student loans should be made contingent on measures of the job market at the time the student graduates. For example, in both Australia and the United Kingdom, students pay only a fixed percentage of their income to pay down student loans. If the student cannot find a job, she pays nothing on her student loan. For reasons we will discuss, we believe a better system would make the loan payment contingent on a broader measure of the labor market rather than the individual’s income. But the principle is clear: recent graduates should be protected if they face a dismal job market upon completing their degrees.9 In return, they should compensate the lender more if they do well.
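To make the contrast concrete, here is a minimal sketch in Python of the two kinds of contract being compared. The 9 percent repayment rate, the $25,000 income threshold, and the interest rate and term of the fixed loan are hypothetical figures chosen only for illustration; the $29,000 balance is the one cited above.

```python
# Illustrative sketch only: hypothetical terms, not the authors' proposal
# or any country's actual repayment schedule.

def fixed_monthly_payment(balance, annual_rate=0.06, years=10):
    """Standard amortized payment on an inflexible debt contract (assumed terms)."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

def contingent_monthly_payment(annual_income, rate=0.09, threshold=25_000):
    """Payment as a share of income above a threshold (assumed figures)."""
    return max(0.0, rate * (annual_income - threshold)) / 12

balance = 29_000  # the student-debt figure cited above
for income in (0, 20_000, 40_000, 80_000):
    print(f"income ${income:>6,}: fixed ${fixed_monthly_payment(balance):6.2f}, "
          f"contingent ${contingent_monthly_payment(income):6.2f}")
```

The fixed payment is identical whether the graduate earns nothing or a comfortable salary; the contingent payment falls to zero when income does and rises when the graduate does well, which is the risk sharing the authors describe.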

The disadvantage of debt in the context of student loans is not a radical leftist idea. Even Milton Friedman recognized problems with student debt. As he put it, “A further complication is introduced by the inappropriateness of fixed money loans to finance investment in training. Such investment necessarily involves much risk. The average expected return may be high, but there is wide variation about the average. Death and physical incapacity is one obvious source of variation but probably much less important than differences in ability, energy, and good fortune.”10 Friedman’s proposal was similar to ours: he believed that student-loan financing should be more “equity-like,” where payments were automatically reduced if the student graduates into a weak job environment.

Making financial contracts in general more equity-like means better risk sharing for the entire economy. When house prices rise, both the lender and borrower would benefit. Likewise, when house prices crash, both would share the burden. This is not about forcing lenders to unfairly bear only downside risk. This is about promoting contracts in which the financial system gets both the benefit of the upside and bears some cost on the downside.

————————————————————————————-

Capital in the Twenty-First Century

By Thomas Piketty

Electronically reproduced from Capital in the Twenty-First Century, by Thomas Piketty, Cambridge, Mass.: Harvard University Press. Copyright © 2014 by the President and Fellows of Harvard College. All rights reserved.

“Social distinctions can be based only on common utility.”—Declaration of the Rights of Man and the Citizen, article 1, 1789

The distribution of wealth is one of today’s most widely discussed and controversial issues. But what do we really know about its evolution over the long term? Do the dynamics of private capital accumulation inevitably lead to the concentration of wealth in ever fewer hands, as Karl Marx believed in the nineteenth century? Or do the balancing forces of growth, competition, and technological progress lead in later stages of development to reduced inequality and greater harmony among the classes, as Simon Kuznets thought in the twentieth century? What do we really know about how wealth and income have evolved since the eighteenth century, and what lessons can we derive from that knowledge for the century now under way?

These are the questions I attempt to answer in this book. Let me say at once that the answers contained herein are imperfect and incomplete. But they are based on much more extensive historical and comparative data than were available to previous researchers, data covering three centuries and more than twenty countries, as well as on a new theoretical framework that affords a deeper understanding of the underlying mechanisms. Modern economic growth and the diffusion of knowledge have made it possible to avoid the Marxist apocalypse but have not modified the deep structures of capital and inequality—or in any case not as much as one might have imagined in the optimistic decades following World War II. When the rate of return on capital exceeds the rate of growth of output and income, as it did in the nineteenth century and seems quite likely to do again in the twenty-first, capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which democratic societies are based. There are nevertheless ways democracy can regain control over capitalism and ensure that the general interest takes precedence over private interests, while preserving economic openness and avoiding protectionist and nationalist reactions. The policy recommendations I propose later in the book tend in this direction. They are based on lessons derived from historical experience, of which what follows is essentially a narrative.

A Debate without Data?

Intellectual and political debate about the distribution of wealth has long been based on an abundance of prejudice and a paucity of fact.

To be sure, it would be a mistake to underestimate the importance of the intuitive knowledge that everyone acquires about contemporary wealth and income levels, even in the absence of any theoretical framework or statistical analysis. Film and literature, nineteenth-century novels especially, are full of detailed information about the relative wealth and living standards of different social groups, and especially about the deep structure of inequality, the way it is justified, and its impact on individual lives. Indeed, the novels of Jane Austen and Honoré de Balzac paint striking portraits of the distribution of wealth in Britain and France between 1790 and 1830. Both novelists were intimately acquainted with the hierarchy of wealth in their respective societies. They grasped the hidden contours of wealth and its inevitable implications for the lives of men and women, including their marital strategies and personal hopes and disappointments. These and other novelists depicted the effects of inequality with a verisimilitude and evocative power that no statistical or theoretical analysis can match.

Indeed, the distribution of wealth is too important an issue to be left to economists, sociologists, historians, and philosophers. It is of interest to everyone, and that is a good thing. The concrete, physical reality of inequality is visible to the naked eye and naturally inspires sharp but contradictory political judgments. Peasant and noble, worker and factory owner, waiter and banker: each has his or her own unique vantage point and sees important aspects of how other people live and what relations of power and domination exist between social groups, and these observations shape each person’s judgment of what is and is not just. Hence there will always be a fundamentally subjective and psychological dimension to inequality, which inevitably gives rise to political conflict that no purportedly scientific analysis can alleviate. Democracy will never be supplanted by a republic of experts—and that is a very good thing.

Nevertheless, the distribution question also deserves to be studied in a systematic and methodical fashion. Without precisely defined sources, methods, and concepts, it is possible to see everything and its opposite. Some people believe that inequality is always increasing and that the world is by definition always becoming more unjust. Others believe that inequality is naturally decreasing, or that harmony comes about automatically, and that in any case nothing should be done that might risk disturbing this happy equilibrium. Given this dialogue of the deaf, in which each camp justifies its own intellectual laziness by pointing to the laziness of the other, there is a role for research that is at least systematic and methodical if not fully scientific. Expert analysis will never put an end to the violent political conflict that inequality inevitably instigates. Social scientific research is and always will be tentative and imperfect. It does not claim to transform economics, sociology, and history into exact sciences. But by patiently searching for facts and patterns and calmly analyzing the economic, social, and political mechanisms that might explain them, it can inform democratic debate and focus attention on the right questions. It can help to redefine the terms of debate, unmask certain preconceived or fraudulent notions, and subject all positions to constant critical scrutiny. In my view, this is the role that intellectuals, including social scientists, should play, as citizens like any other but with the good fortune to have more time than others to devote themselves to study (and even to be paid for it—a signal privilege).

There is no escaping the fact, however, that social science research on the distribution of wealth was for a long time based on a relatively limited set of firmly established facts together with a wide variety of purely theoretical speculations. Before turning in greater detail to the sources I tried to assemble in preparation for writing this book, I want to give a quick historical overview of previous thinking about these issues.

————————————————————————————-

Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance

By Julia Angwin

Excerpted from Dragnet Nation by Julia Angwin, published by Times Books, an imprint of Henry Holt and Company LLC. Copyright 2014 by Julia Angwin. All rights reserved.

Who is watching you?

This was once a question asked only by kings, presidents, and public figures trying to dodge the paparazzi and criminals trying to evade the law. The rest of us had few occasions to worry about being tracked.

But today the anxious question—“who’s watching?”—is relevant to everyone regardless of his or her fame or criminal persuasion. Any of us can be watched at almost any time, whether it is by a Google Street View car taking a picture of our house, or an advertiser following us as we browse the Web, or the National Security Agency logging our phone calls.

Dragnets that scoop up information indiscriminately about everyone in their path used to be rare; police had to set up roadblocks, or retailers had to install and monitor video cameras. But technology has enabled a new era of supercharged dragnets that can gather vast amounts of personal data with little human effort.

These dragnets are extending into ever more private corners of the world.

Consider the relationship of Sharon Gill and Bilal Ahmed, close friends who met on a private online social network called PatientsLikeMe.com.

Sharon and Bilal couldn’t be more different. Sharon is a forty-two-year-old single mother who lives in a small town in southern Arkansas. She ekes out a living trolling for treasures at yard sales and selling them at a flea market. Bilal Ahmed, thirty-six years old, is a single, Rutgers-educated man who lives in a penthouse in Sydney, Australia. He runs a chain of convenience stores.

Although they have never met in person, they became close friends on a password-protected online forum for patients struggling with mental health issues. Sharon was trying to wean herself from antidepressant medications. Bilal had just lost his mother and was suffering from anxiety and depression.

From their far corners of the world, they were able to cheer each other up in their darkest hours. Sharon turned to Bilal because she felt she couldn’t confide in her closest relatives and neighbors. “I live in a small town,” Sharon told me. “I don’t want to be judged on this mental illness.”

But in 2010, Sharon and Bilal were horrified to discover they were being watched on their private social network.

It started with a break-in. On May 7, 2010, PatientsLikeMe noticed unusual activity on the “mood” forum where Sharon and Bilal hung out. A new member of the site, using sophisticated software, was attempting to “scrape,” or copy, every single message off PatientsLikeMe’s private online “Mood” and “Multiple Sclerosis” forums.

PatientsLikeMe managed to block and identify the intruder: it was the Nielsen Company, the New York media-research firm. Nielsen monitors online “buzz” for its clients, including major drug makers. On May 18, PatientsLikeMe sent a cease-and-desist letter to Nielsen and notified its members of the break-in. (Nielsen later said it would no longer break into private forums. “It’s something that we decided is not acceptable,” said Dave Hudson, the head of the Nielsen unit involved.)

But there was a twist. PatientsLikeMe used the opportunity to inform members of the fine print they may not have noticed when they signed up. The website was also selling data about its members to pharmaceutical and other companies.

The news was a double betrayal for Sharon and Bilal. Not only had an intruder been monitoring them, but so was the very place that they considered to be a safe space. It was as if someone filmed an Alcoholics Anonymous meeting and AA was mad because that film competed with its own business of videotaping meetings and selling the tapes. “I felt totally violated,” Bilal said.

Even worse, none of it was necessarily illegal. Nielsen was operating in a gray area of the law even as it violated the terms of service at PatientsLikeMe, but those terms are not always legally enforceable. And it was entirely legal for PatientsLikeMe to disclose to its members in its fine print that it would sweep up all their information and sell it.

This is the tragic flaw of “privacy” in the digital age. Privacy is often defined as freedom from unauthorized intrusion. But many of the things that feel like privacy violations are “authorized” in some fine print somewhere.

And yet, in many ways, we have not yet fully consented to these authorized intrusions. Even if it is legal for companies to scoop up information about people’s mental health, is it socially acceptable?

Eavesdropping on Sharon and Bilal’s conversations might be socially acceptable if they were drug dealers under court-approved surveillance. But is sweeping up their conversations as part of a huge dragnet to monitor online “buzz” socially acceptable?

Dragnets that indiscriminately sweep up personal data fall squarely into the gray area between what is legal and what is socially acceptable.

We are living in a Dragnet Nation—a world of indiscriminate tracking where institutions are stockpiling data about individuals at an unprecedented pace. The rise of indiscriminate tracking is powered by the same forces that have brought us the technology we love so much—powerful computing on our desktops, laptops, tablets, and smartphones.

Before computers were commonplace, it was expensive and difficult to track individuals. Governments kept records only of occasions, such as birth, marriage, property ownership, and death. Companies kept records when a customer bought something and filled out a warranty card or joined a loyalty club. But technology has made it cheap and easy for institutions of all kinds to keep records about almost every moment of our lives.

Consider just a few facts that have enabled the transformation. Computer processing power has doubled roughly every two years since the 1970s, enabling computers that were once the size of entire rooms to fit into a pants pocket. And recently, the cost to store data has plummeted from $18.95 for one gigabyte in 2005 to $1.68 in 2012. It is expected to cost under a dollar in a few years.
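As a quick back-of-the-envelope check of those two figures (the per-gigabyte prices come from the text above; the twenty-year horizon in the last line is only an illustration of doubling every two years):

```python
# Rough arithmetic on the figures quoted above; nothing here is new data.
cost_2005, cost_2012 = 18.95, 1.68   # dollars per gigabyte of storage
years = 2012 - 2005

annual_factor = (cost_2012 / cost_2005) ** (1 / years)
print(f"storage cost falls to about {annual_factor:.0%} of its level each year")

# "Doubling roughly every two years" compounds to about a 1,000x gain over 20 years.
print(f"processing power over 20 years: {2 ** (20 / 2):,.0f}x")
```

At roughly a 30 percent price decline per year, $1.68 per gigabyte drops below a dollar within about two years, consistent with the expectation quoted above.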

The combination of massive computing power, smaller and smaller devices, and cheap storage has enabled a huge increase in indiscriminate tracking of personal data. The trackers are not all intruders, like Nielsen. The trackers also include many of the institutions that are supposed to be on our side, such as the government and the companies with which we do business.

Of course, the largest of the dragnets appear to be those operated by the U.S. government. In addition to its scooping up vast amounts of foreign communications, the National Security Agency is also scooping up Americans’ phone calling records and Internet traffic, according to documents revealed in 2013 by the former NSA contractor Edward Snowden.

But the NSA is not alone (although it may be the most effective) in operating dragnets. Governments around the world—from Afghanistan to Zimbabwe—are snapping up surveillance technology, ranging from “massive intercept” equipment to tools that let them remotely hack into people’s phones and computers. Even local and state governments in the United States are snapping up surveillance technology ranging from drones to automated license plate readers that allow them to keep tabs on citizens’ movements in ways never before possible. Local police are increasingly tracking people using signals emitted by their cell phones.

Meanwhile, commercial dragnets are blossoming. AT&T and Verizon are selling information about the location of their cell phone customers, albeit without identifying them by name. Mall owners have started using technology to track shoppers based on the signals emitted by the cell phones in their pockets.

Retailers such as Whole Foods have used digital signs that are actually facial recognition scanners. Some car dealerships are using a service from Dataium that lets them know which cars you have browsed online, if you have given them your e-mail address, before you arrive on the dealership lot.

Online, hundreds of advertisers and data brokers are watching as you browse the Web. Looking up “blood sugar” could tag you as a possible diabetic by companies that profile people based on their medical condition and then provide drug companies and insurers access to that information. Searching for a bra could trigger an instant bidding war among lingerie advertisers at one of the many online auction houses.

And new tracking technologies are just around the corner: companies are building facial recognition technology into phones and cameras, technology to monitor your location is being embedded into vehicles, wireless “smart” meters that gauge the power usage of your home are being developed, and Google has developed Glass, tiny cameras embedded in eyeglasses that allow people to take photos and videos without lifting a finger.

Skeptics say: What’s wrong with all of our data being collected by unseen watchers? Who is being harmed?

Admittedly, it can be difficult to demonstrate personal harm from a data breach. If Sharon or Bilal is denied a job or insurance, they may never know which piece of data caused the denial. People placed on the no-fly list are never informed about the data that contributed to the decision.

But, on a larger scale, the answer is simple: troves of personal data can and will be abused.
