Usually the teacher says, “I don’t condone this, but you should know about it.”

Fantastic description of modern illiteracy by Henry Giroux:

Illiteracy is no longer restricted to populations immersed in poverty with little access to quality education; nor does it only suggest the lack of proficient skills enabling people to read and write with a degree of understanding and fluency. More profoundly, illiteracy is also about refusing to act from a position of thoughtfulness, informed judgment, and critical agency.

Illiteracy has become a political weapon and form of political repression that works to render critical agency inoperable, and restages power as a mode of domination. Illiteracy in the service of violence now functions to depoliticize people by making it difficult for individuals to develop informed judgments, analyze complex relationships and draw upon a range of sources to understand how power works and how they might be able to shape the forces that bear down on their lives. As a depoliticizing force, illiteracy works to make people powerless, and reinforces their willingness to accept being governed rather than learn how to govern.

This mode of illiteracy now constitutes the modus operandi of a society that both privatizes and kills the imagination by poisoning it with falsehoods, consumer fantasies, data loops and the need for instant gratification. This is a mode of illiteracy and education that has no language for relating the self to public life, social responsibility or the demands of citizenship. It is important to recognize that the prevalence of such manufactured illiteracy is not simply about the failure of colleges and universities to create critical and active citizens. It is about an authoritarian society that eliminates public spheres that make thinking possible while imposing a culture of fear in which there is the looming threat that anyone who holds power accountable will be punished. At stake here is not only a crisis of education, memory, ethics and agency but a crisis that reaches into the very foundation of a strong democracy.

I learned the type of analysis I do on this blog in school. I don’t know what happens in freshman composition anymore, but I know that in about 2003 it included Roland Barthes and “The Death of the Author.” The cool people talked about semiotics. True story:

It’s almost shocking, a world where Mark Zuckerberg and Bill Gates’ opinions about education count, where kids are given iPads. How far we’ve fallen.

Direct experience, not mediated.


Now that we can’t live without our smartphones, it’s easy to forget that technology was once opposed. This was Bill Joy of Sun Microsystems 17 years ago; kids born in 2000 would have no concept of it:

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Isn’t that creepy, now that machine learning algorithms determine elections?

“What we’re talking about here is a means of mind control on a massive scale that there is no precedent for in human history.” That may sound hyperbolic, but Robert Epstein says it’s not an exaggeration. Epstein, a research psychologist at the American Institute for Behavioral Research in Vista, California, has found that the higher a politician ranks on a page of Internet search results, the more likely you are to vote for them.

“I have a lot of faith in the methods they’ve used, and I think it’s a very rigorously conducted study,” says Nicholas Diakopoulos, a computer scientist at the University of Maryland, College Park, who was not involved in the research. “I don’t think that they’ve overstated their claims.”…

In a third experiment, the team tested its hypothesis in a real, ongoing election: the 2014 general election in India. After recruiting a sample of 2150 undecided Indian voters, the researchers repeated the original experiment, replacing the Australian candidates with the three Indian politicians who were actually running at the time. The results of the real world trial were slightly less dramatic—an outcome that researchers attribute to voters’ higher familiarity with the candidates. But merely changing which candidate appeared higher in the results still increased the number of undecided Indian voters who would vote for that candidate by 12% or more compared with controls. And once again, awareness of the manipulation enhanced the effect.

A few percentage points here and there may seem meager, but the authors point out that elections are often won by margins smaller than 1%. If 80% of eligible voters have Internet access and 10% of them are undecided, the search engine effect could convince an additional 25% of those undecided to vote for a target candidate, the team reports online this week in the Proceedings of the National Academy of Sciences. That type of swing would determine the election outcome, as long as the expected win margin was 2% or less. “This is a huge effect,” Epstein says. “It’s so big that it’s quite dangerous.”
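The arithmetic behind that claim is easy to check. A quick sketch, using only the percentages quoted above (none of these numbers are measured data, just the article’s hypothetical):

```python
# Back-of-the-envelope check of the swing described in the quote above.
internet_access = 0.80  # share of eligible voters with Internet access
undecided = 0.10        # share of those voters who are undecided
convinced = 0.25        # share of the undecided swayed toward the target

# Fraction of ALL eligible voters moved toward one candidate:
swing = internet_access * undecided * convinced
print(f"{swing:.0%} of eligible voters")  # prints "2% of eligible voters"
```

A 2% shift of the total electorate toward one candidate is exactly why the article conditions the claim on an expected win margin of 2% or less.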

But perhaps the most concerning aspect of the findings is that a search engine doesn’t even have to intentionally manipulate the order of results for this effect to manifest. Organic search algorithms already in place naturally put one candidate’s name higher on the list than others. This is based on factors like “relevance” and “credibility” (terms that are closely guarded by developers at Google and other major search engines). So the public is already being influenced by the search engine manipulation effect, Epstein says. “Without any intervention by anyone working at Google, it means that Google’s algorithm has been determining the outcome of close elections around the world.”

Presumably Google isn’t intentionally tweaking its algorithms to favor certain presidential candidates, but Epstein says it would be extremely difficult to tell if it were. He also points out that the Internet mogul will benefit more from certain election outcomes than others.

And according to Epstein, Google is very aware both of the power it wields, as well as the research his team is doing: When the team recruited volunteers from the Internet in the second experiment, two of the IP addresses came from Google’s head office, he says.

“It’s easy to point the finger at the algorithm because it’s this supposedly inert thing, but there are a lot of people behind the algorithm,” Diakopoulos says. “I think that it does pose a threat to the legitimacy of the democracy that we have. We desperately need to have a public conversation about the role of these systems in the democratic processes.”

Bill Joy continued:

In the book, you don’t discover until you turn the page that the author of this passage is Theodore Kaczynski – the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber’s next target.

Kaczynski’s actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.

Kaczynski’s dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy’s law – “Anything that can go wrong, will.” (Actually, this is Finagle’s law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved…

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

My life has been driven by a deep need to ask questions and find answers. When I was 3, I was already reading, so my father took me to the elementary school, where I sat on the principal’s lap and read him a story. I started school early, later skipped a grade, and escaped into books – I was incredibly motivated to learn. I asked lots of questions, often driving adults to distraction.

As a teenager I was very interested in science and technology. I wanted to be a ham radio operator but didn’t have the money to buy the equipment. Ham radio was the Internet of its time: very addictive, and quite solitary. Money issues aside, my mother put her foot down – I was not to be a ham; I was antisocial enough already.

I may not have had many close friends, but I was awash in ideas…

I excelled in mathematics in high school, and when I went to the University of Michigan as an undergraduate engineering student I took the advanced curriculum of the mathematics majors. Solving math problems was an exciting challenge, but when I discovered computers I found something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive…

But while I was aware of the moral dilemmas surrounding technology’s consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.

Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.

Donald Trump used a MOAB:

It’s important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.

In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the scientific attitude, saying, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences.”…

Two years later, in 1948, Oppenheimer seemed to have reached another stage in his thinking, saying, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge they cannot lose.”

Isn’t it remarkable that the head of a big technology company passed around Unabomber quotes among his friends and wrote about them sympathetically in a national publication? That’s the difference education makes. High school debate and, for that matter, the trivium taught arguing things from both sides. We have standardized tests now.

Even earlier in the mechanization process, Martin Heidegger:

Technology is not equivalent to the essence of technology. When we are seeking the essence of “tree,” we have to become aware that That which pervades every tree, as tree, is not itself a tree that can be encountered among all the other trees.

Likewise, the essence of technology is by no means anything technological. Thus we shall never experience our relationship to the essence of technology so long as we merely conceive and push forward the technological, put up with it, or evade it. Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology…

But where have we strayed to? We are questioning concerning technology, and we have arrived now at aletheia, at revealing. What has the essence of technology to do with revealing? The answer: everything. For every bringing-forth is grounded in revealing. Bringing-forth, indeed, gathers within itself the four modes of occasioning – causality – and rules them throughout. Within its domain belong end and means, belongs instrumentality.

Instrumentality is considered to be the fundamental characteristic of technology. If we inquire, step by step, into what technology, represented as means, actually is, then we shall arrive at revealing. The possibility of all productive manufacturing lies in revealing. Technology is therefore no mere means. Technology is a way of revealing. If we give heed to this, then another whole realm for the essence of technology will open itself up to us. It is the realm of revealing, i.e., of truth.

This prospect strikes us as strange. Indeed, it should do so, should do so as persistently as possible and with so much urgency that we will finally take seriously the simple question of what the name “technology” means. The word stems from the Greek. Technikon means that which belongs to techne. We must observe two things with respect to the meaning of this word. One is that techne is the name not only for the activities and skills of the craftsman, but also for the arts of the mind and the fine arts. Techne belongs to bringing-forth, to poiesis; it is something poietic.

The other point that we should observe with regard to techne is even more important. From earliest times until Plato the word techne is linked with the word episteme. Both words are names for knowing in the widest sense. They mean to be entirely at home in something, to understand and be expert in it. Such knowing provides an opening up. As an opening up it is a revealing…

What is modern technology? It too is a revealing. Only when we allow our attention to rest on this fundamental characteristic does that which is new in modern technology show itself to us. And yet the revealing that holds sway throughout modern technology does not unfold into a bringing-forth in the sense of poiesis. The revealing that rules in modern technology is a challenging [Herausfordern], which puts to nature the unreasonable demand that it supply energy that can be extracted and stored as such. But does this not hold true for the old windmill as well? No. Its sails do indeed turn in the wind; they are left entirely to the wind’s blowing. But the windmill does not unlock energy from the air currents in order to store it. In contrast, a tract of land is challenged into the putting out of coal and ore. The earth now reveals itself as a coal mining district, the soil as a mineral deposit. The field that the peasant formerly cultivated and set in order [bestellte] appears differently than it did when to set in order still meant to take care of and to maintain. The work of the peasant does not challenge the soil of the field. In the sowing of the grain it places the seed in the keeping of the forces of growth and watches over its increase. But meanwhile even the cultivation of the field has come under the grip of another kind of setting-in-order, which sets upon [stellt] nature.

It sets upon it in the sense of challenging it. Agriculture is now the mechanized food industry. Air is now set upon to yield nitrogen, the earth to yield ore, ore to yield uranium, for example; uranium is set upon to yield atomic energy, which can be released either for destruction or for peaceful use…

Only to the extent that man for his part is already challenged to exploit the energies of nature can this ordering revealing happen. If man is challenged, ordered, to do this, then does not man himself belong even more originally than nature within the standing-reserve? The current talk about human resources, about the supply of patients for a clinic, gives evidence of this. The forester who, in the wood, measures the felled timber and to all appearances walks the same forest path in the same way as did his grandfather is today commanded by profit-making in the lumber industry, whether he knows it or not. He is made subordinate to the orderability of cellulose, which for its part is challenged forth by the need for paper, which is then delivered to newspapers and illustrated magazines. The latter, in their turn, set public opinion to swallowing what is printed, so that a set configuration of opinion becomes available on demand [anticipates Chomsky!]. Yet precisely because man is challenged more originally than are the energies of nature, i.e., into the process of ordering, he never is transformed into mere standing-reserve. Since man drives technology forward, he takes part in ordering as a way of revealing. But the unconcealment itself, within which ordering unfolds, is never a human handiwork, any more than is the realm through which man is already passing every time he as a subject relates to an object.

There’s something unhealthy about the mindset behind modern technology. From Salon, just recently:

“Technology advertising is especially interesting because what it’s doing is saying all technological advances are good and all technology is beneficial to the people who will be lucky enough to adopt it,” John Carroll, assistant professor of Mass Communications at Boston University, who specializes in advertising and media criticism, told Salon. “There’s nothing that says an advertisement needs to point out the downside of a product, but one of the issues here is that the counterbalancing argument that not all innovation is beneficial doesn’t get the kind of exposure that might be helpful to the public.”

Indeed, visit any technology-focused media outlet, or the tech sections of many news organizations, and you’ll see that “gadget porn” videos, hagiographic profiles of startup founders or the regurgitation of lofty growth expectations from Wall Street analysts vastly outnumber critical analyses of technological disruption. The criticisms that do exist tend to focus on ancillary issues, such as Silicon Valley’s dismal lack of workplace diversity, or how innovation is upsetting norms in the labor market, or the class-based digital divide; all are no doubt important topics, but they’re ones that don’t question the overall assumptions that innovation and disruption are at worst harmless if not benevolent.

Carroll says that it’s up to the media, schools and even religious institutions to counterbalance the presumptions made in advertising, whose goal, he points out, is often to portray happiness “through acquisition as opposed to achievement.”

Overall, there have been SUBSTANTIAL setbacks on the ideological warfare front. The mental shift people need to make is just getting more and more counterintuitive for most.

Close