Saturday, January 18, 2014

Obama, Surveillance and Digital Leakage: Bin Laden Is Dead, and So Is America

Yesterday was a banner day for disquisitions on surveillance. Pres. Obama's speech and a lengthy piece by David Cole on leakers Snowden, Manning, and Assange in The New York Review of Books with their careful analyses and measured judgments arrived almost simultaneously. What a motherlode of intelligence... I mean, sophistication.

How many brilliant speeches have we seen from Obama? Several, by my count, at least since his remarkable summation of the race issue before he was first elected President. He has an excellent nose for this sort of thing - just when it seems that opposing forces are closing in on him, questioning his objectivity, his liberal principles or his political judgment, he times and words his speech in such a way as to take both the moral and intellectual high ground, demonstrating a grasp of both history and the current alignment of political forces. Or so it has been in the past. His speech on surveillance was in some sense a paradigm of balanced judgment - at least if you think balance is always achieved by giving each side their due, identifying with their supposedly admirable goals while distancing yourself from radical views on either side. But maybe that's not what we needed this time.

Obama is no fire-breathing radical: his list of ways in which he will supposedly rein in the surveillance state he has set up (or expanded) is nothing if not cautious, not to say an empty wave of the wand in the direction of his (former) liberal supporters. Except perhaps for ending the blanket telephone surveillance program "in its current form" - in what new form it will be reinstated remains to be seen - he gave little more than lip service to the demands from consumers, industry and privacy advocates that the massive gathering of private electronic data be curtailed.

Cole, in his review of books by Rahul Sagar and James C. Goodale, carefully assesses the benefits and drawbacks of the classified document releases by Manning and Snowden, and seems to get behind some criteria Goodale has come up with by which leakers can be either praised or condemned for their behavior. Basically, the leaks must reveal real abuses, pose few threats to the security of U.S. operations or individuals, and be carefully tailored in scope to their limited purposes. After reading this I almost wanted to give the man a big hug and say, "Thank you, James, for coming up with those judicious guidelines for the illegal release of secret espionage documents." Had Manning and Snowden only had such criteria in hand we would undoubtedly have for our perusal just the right selection of their 2 million or so documents on government spying; no CIA operatives would be in danger; and friendly foreign governments would be unaware that we were poking our noses into their leaders' underwear. Only, what would have compelled the President to address the subject of surveillance in even the unsatisfactory way that he did?

What neither Obama's nor Cole's cautious assessment puts on the table - what indeed both of them conceal, as do almost all other recent conversations on these subjects - is the underlying logic of the politics, programs and documents that make up the various controversial initiatives. "Those who are troubled by our existing programs are not interested in a repeat of 9/11," opines the President; and who could disagree with that? (Except that presumably most terrorists are also "troubled by our existing programs" - but we can let that slide.) The problem with this statement is that it assumes we all share a basic principle that in fact I doubt we share; indeed, that even Cole and Obama may not share; and yet this principle, so consistently shown to be inoperative over the course of history, is the very heart of the Patriot Act and everything that has followed from it.

The principle I am referring to may be stated in several ways; here are a few, since different versions may help different individuals understand what I'm getting at:

1. The possibility of pain creates the necessity of intrusion into everyone's personal affairs.
2. We should relinquish some important liberties in order to protect people from threats.
3. We are more afraid of random attacks than of pervasive surveillance.
4. The courage to endure potential hardships is less important than the prudence to prevent them, regardless of the means required to do so.
5. Privacy is only a relative good; security is an absolute.

Now, in my opinion, not a single form of this principle is justified, either in theory or in the historical practice of most nations. No one has ever tried to found or run a nation on such principles, nor would they be in any way admirable if they did. It is hard to imagine anyone mounting a demonstration or starting a petition to demand of the government that it collect or survey personal data on essentially everyone in the nation in order to deter threats to their security. There may be people who live in such fear of random violence - who vividly imagine themselves or their children as the victims of each new terrorist bombing - and who would be willing to abide by a principle of this sort. This is especially the case after the media barrage of worldwide reporting on actual and planned terrorist attacks, which gives the appearance of pervasive, present danger to what are in fact rare and widely separated events. But it is hardly surprising that a distorted epistemological position can lead to the distortion of a natural instinct, consistently embodied in democratic institutions over the course of many centuries, that resists the general substitution of security for privacy and personal freedom.

Still, even if people harbor such fears, they almost certainly do so with some misgivings. The person who is willing to stand on a street corner and rally people to the cause of government surveillance is a highly unusual and not particularly admirable exception, for their stance is one of promoting fear and weakness and diminishing personal integrity in the name of protection. Whereas the one who is willing to rail against such surveillance, even in light of our intermittent suffering at the hands of terrorists, is admirable for their courage and defense of individual freedom, and willingness to endure hardship for the sake of higher human goals. Dignity does not consist in prostrating ourselves before the godfather of information, be it Stalin or Ashcroft or Obama; it consists in having the courage to face the fact that freedom sometimes entails sacrifices of personal safety and comfort.

Obama's reference to the intelligence reports he receives every morning is close to an explicit adoption of the distorted epistemological position I just described, and his defense of a wide variety of government surveillance programs panders to exactly the kinds of fears associated with it. This, in my opinion, is not leadership, but mere political pragmatism; not necessarily the demagogic kind of pragmatism that validates widespread, base instincts, but the kind in which politicians would rather ride roughshod over liberties and progressive values than face a day in which fingers might be pointed at them for not doing enough to avert violence. This is the way of the world. It is, for example, the reason NYC Mayor Bill de Blasio will not really end the stop-and-frisk practices of the NYPD, but rather tone them down enough to pacify critics, especially those who associate them with racial profiling.

It is also the position of an executive who, having once pictured himself as the leader of a people, now pictures himself as somehow obliged to commend and uphold the patriotic instincts of the security forces whose one-sided perspective actually serves to harm the more fundamental interests of the nation they are trying to protect. "I'm Commander-in-Chief (Mayor, Governor, whatever), so now I have to let my lieutenants and foot soldiers know how much I appreciate all they've done." On this logic Obama also refused to pursue and punish those who were responsible for kidnapping and waterboarding. When it comes to the NSA and surveillance in general, he is moved to exude flowery rhetoric in reminding us how much these people have our interests at heart, and assuring us that they won't abuse the information they're privy to. But there is a subliminal message here too: the situation is not really that of the man in charge showing gratitude for the support and performance of his staff, but of the man who now has this overblown security apparatus around him and is afraid to cross it. Oh gentle souls of the NSA, CIA, USAID and the like, do not be offended, the President loves you; he just thinks you might ask in a nicer way for access to every piece of personal information on the planet. So that is the new norm: we must ask nicely for the security forces not to do everything they might conceivably want to do, because in point of fact, they run us, we do not run them.

This is why, sadly, 9/11 was a resounding victory for He Who Shall Not Be Named, not only in the short-term loss of life and financial meltdown, but in the long-term victory of totalitarian values over democratic ideals. Bin Laden (oops) could not have had a more complete success; he managed not only to cause enormous physical destruction but to unbalance the entire system of U.S. democracy. Surely this was the greater part of his goal - to bring Western civilization more in line with the harsh and neo-fascistic reality of life under extremist versions of Sharia law. By reorienting our leaders towards a defensive principle like the one mentioned above, he persuaded us that democracy has a much lower moral value than we believed; while government, ordained by no one to carry out its massive intrusion into our daily lives, operates no longer in the name of general or individual freedom but as a paternalistic overseer of our email messages, phone conversations, shopping habits and travel plans.

As for the individuals who take it upon themselves to release the details of this surveillance to the world, god help us if they have to first take classes at the Amy Vanderbilt school to learn the proper methods of revealing the dirt under the rug of our national security apparatus. These individuals are not always motivated by the highest ideals or guided by common sense in the first place, so the notion that we can fine-tune their actions is a bit ridiculous anyway. But to suggest that they should first read through and redact documents one by one so as to make sure no one gets hurt is like saying that a man who runs into a burning house with a water bucket should calculate how to toss it so as to avoid too much ancillary water damage. It stands to reason that what was released by Snowden and Manning is not even the tip of the iceberg, more like a snow cone on Pluto, compared with what we don't know about the U.S. espionage that is conducted in our name. Obama may be right about the hypocrisy of nations that criticize our espionage efforts while conducting their own. Then again, Joe McCarthy was right that Communism was an evil system. Neither Obama nor McCarthy adds anything to his own ethical standing by pointing to the failings of others. So the German leaders are spying on our leaders? Goodness, what are they going to discover - that we are spying on them too? Then if we stop spying on them they will discover nothing, except maybe Michelle's secret recipe for lasagna or something equally exciting. Or perhaps that we are strong enough to survive external threats without such spying, that we have little to hide, and that we are an "open society" after all.

Nobody in their right mind believes that Muslim extremists around the globe can do anything more to us than cause occasional loss of life on a scale that is very minor as world conflicts go. All the terrorist attacks that have ever been carried out or planned in the U.S. do not add up to the annual loss of life or injury in automobile accidents. The astute observer will note that government surveillance of automobile travel has greatly expanded as well; one form of surveillance normalizes all other forms, an added negative effect of the runaway programs that are nominally directed only at terrorists. Automobile surveillance is if anything more pernicious than the other forms, not only because it involves tracking the personal movements of individuals who are suspected of no crime, but because the red light cameras, speed cameras, toll cameras and alternate side parking cameras (they are now mounted on street-cleaning vehicles in some areas) mete out automated punishments for victimless crimes, while not clearly demonstrating any advantage whatsoever over human security forces (leading to the suspicion that they are really just more successful fundraising vehicles for local governments, the morality or accident toll of running red lights having little to do with it). Yet we tolerate even this; and indeed, from Montgomery County, MD, where speed cameras have proliferated like kudzu, to NYC, where the new Mayor has been a vocal proponent of this form of surveillance, putatively liberal administrations have rammed these surveillance systems down the throats of the local population. Who among us will stand up and say, "Thank you, Mayor de Blasio, for protecting these innocent pedestrians by tracking my movements with cameras everywhere I go?" About as many as are willing to say, "Please check my emails to prevent someone somewhere from carrying out some terrorist act". But this is what happens when a single terrorist act is allowed to undermine the moral fabric of a whole society.

Obama's speech is laced with suggestions that without these ubiquitous surveillance measures, terrorists will be popping up around every corner; moreover, that there is no other way to prevent them. In fact, all these measures did not prevent the Boston bombing; they did not prevent people from boarding planes with explosives; they did not prevent someone from trying to detonate a van laden with explosives in Times Square. So what is needed - even more surveillance? Or is it perhaps that surveillance of the kind that is being conducted is not the right remedy in the first place?

A long time before 9/11 I wondered: why aren't the cockpits of commercial airliners secured? What if someone, perhaps just a distraught or crazy person with a weapon, barged in and caused enough mayhem to crash the plane? If the cockpits had been secured, 9/11 would not have happened. The maniacs could have killed many people on board but they could not have crashed the plane and certainly not directed it to a particular target. A relatively simple screening operation could have prevented the Boston attacks; we don't like these bag checks at concert halls and baseball stadiums much, but they are far from being objectionable in the way that the gathering of hundreds of millions of email messages is objectionable. There are many kinds of measures that have not been put in place, but they would cost money. Every truck or van that crosses a heavily used bridge or tunnel could be scanned - the technology is there, and it is minimally invasive to the lives of individuals. But it would cost money. Cheaper to secretly gather data from Google servers and install spy programs on the computers of unknown people. No one who opposes the surveillance measures that Obama refuses to back down from need accept the charge of being soft on terrorism.

But we all do need to reject the logic of surveillance; we need to have the courage to realize that preserving democratic values may entail risks and hardships, at least until the forces who are violently opposed to democracy have been sufficiently dispatched. People have sacrificed their lives for those values again and again - that is the real lesson of the American Revolution, not the dubious reference to "surveillance" by Paul Revere and his buddies. When that dream dies, so does the nation that was built on it. Let's hope my title is wrong, and America is only sleeping. But the fact that the shining hopes of liberalism are leading the charge to undermine these democratic values is a very troubling sign.

Thursday, October 7, 2010

Fincher's Facebook

As a director, no one can accuse David Fincher of running away from disagreeable things. Serial killers and bare-knuckle fistfights have been his bread and butter for the most part. Why he decided to make The Social Network, though, is beyond me. A dorm room drama with a side of financial hooha, the film does not have a point, does not know if it has a hero or only villains, and does nothing to explore the really deep issues that it dances around. It is less a statement on the nature of these times than a product of the general confusion that reigns in the great valley of ethics and technology.

I am not here (on a technology and ethics blog) to talk about great performances, for which it may be acknowledged that at least Jesse Eisenberg's Mark Zuckerberg and Justin Timberlake's Sean Parker constitute a couple of decent contenders. It has long (long, long...) been the case that Hollywood's pool of acting talent far exceeds its supply of cinematic vehicles worthy of it. Nor do I wish to deny that the film manages, albeit in a rather formulaic way, to make a dramatically captivating tale of personal betrayals and egomania out of what you might call the classic story of the classic nerd. I suppose I could bite halfway into the summary on Rotten Tomatoes, "Impeccably scripted, beautifully directed, and filled with fine performances, The Social Network is a riveting, ambitious example of modern filmmaking at its finest." The first half, I suppose - though after it was over, my partner and I both kind of wondered aloud why anyone would really care about the story once you know how it unfolds. As for "modern filmmaking at its finest", I hope not. Modern filmmaking at its finest ought to explore philosophical and social issues in a profound and aesthetically probing way; it ought to be art, specifically. The various comparisons to Citizen Kane that appear explicitly or otherwise in the capsule quotes on RottenTomatoes.com seem to me particularly inappropriate. That film works on so many levels it's hard to count; utilizes revolutionary cinematic techniques; and more than anything, probes its subject so deeply you can feel the nerves jangling and smell its breath. By comparison, The Social Network is nothing more than a stock Hollywood drama with a slightly unusual slant.

Why ain't this art? For one thing, the status of Mark Zuckerberg throughout the film seems to gyrate between admirable genius and objectionable "asshole" - a word used twice to describe him, and which seems to bookend the film. (That such language is startling enough to stand out in this way gives you an idea just what thin soup we are dealing with here.) Is Zuckerberg a technical genius with a keen sense of social needs and trends, or merely a hacker who managed to steal an idea from other Harvard students and through a combination of luck and deceit rode it to fame and fortune? The film version of Zuckerberg is quick with bright-sounding comeback quips, arrogant and condescending to just about everyone around him. Legal proceedings to him are opportunities to stare out the window; a hurt and angry friend is just another body in the room. Yet Fincher manages to make him just this side of cute and lovable too, largely on the strength of his abiding confidence in his own mission, and misguided but recognizably vulnerable appeals to the people he has hurt. The man emerges from the film as neither a really good dude, nor a really bad dude, just a somewhat confused kid who grew up too fast. Whether that is adequate subject matter for a film is a matter of taste, I guess, but for me, the character is neither tragic nor heroic nor evil, just morally and spiritually confused.

Apart from the uninspired treatment of Zuckerberg's character, the film raises issues in ethics and technology, but does virtually nothing to move the debate forward, or even deepen the issues for our contemplation. Obviously, the moving idea behind it is the question of intellectual property rights. The Winklevoss brothers who sued Zuckerberg over Facebook and received a $20 million settlement are given plenty of screen time, but the matter of what constitutes an original idea, or whether an idea in itself constitutes intellectual property, or what material evidence is relevant to the question of who had an idea first, or what constitutes "theft" of creative ideas, or what constitutes deception with regard to informal working relationships - all this is somehow alluded to, but at best superficially explored. This is in spite of the fact that the film partly hinges on a hearing in which evidence is presented for and against such claims. These scenes largely involve a personal standoff between Zuckerberg and one or more of the examining attorneys; there is very little to make you think about the issues involved, which are among the most difficult and serious issues before us as a society.

This is disappointing for anyone who is looking to find insights, or even a particularly incisive or perspicuous presentation of issues, here. Fincher has shown some depth in exploring the psychology of the serial killer in Seven (which I have seen) and Zodiac (which I have not seen but read enough about). One might have expected that this entry into the subject matter of technology and its relationship to society would have produced similar depth. Instead, we get a gooey admixture of romantic and financial subplots and submerged philosophical issues. Zuckerberg is heard averring that the Winklevoss brothers are merely pretentious incompetents while he himself is the creative genius. But the question as to what that means, or what would constitute a reason for making that judgment, is left hanging throughout the film. There is even a suggestion that he developed Facebook while pretending to work for them as a matter of personal revenge for a perceived insult, thereby distracting attention from the potentially much more interesting issue of how society might navigate the competing claims of creative initiative.

The superficial approach to ethical issues is further emphasized by the portrayal of Sean Parker, the Napster co-founder played by Justin Timberlake. The whole Napster episode, and file-sharing in general, raises issues big enough for a separate film, and one might have expected that the prominent role given to Parker in this film would bring the moral issues lurking under the surface bubbling up. But far from it. Parker is identified early on as a drug abuser and manipulative egomaniac, and his arrest for cocaine possession is duly played out, while the whole question of a Napster/Facebook connection - perhaps it represents some unstable emulsion of technical genius and moral decay among "Generation Y" (or Z, or on beyond Zebra) - is barely given a nod. The only impact Parker's arrest has on Facebook is to turn Zuckerberg's face glum and send him to his own site to "friend" an ex-girlfriend. The whole second half of the film is a missed opportunity to say something interesting on this subject.

Privacy? Anybody heard of any privacy issues regarding Facebook, or social networking in general? It is true that the film presents as a kickoff issue Zuckerberg's zeal to create a site on which he can embarrass the girl who has just dumped him. But to the extent that this is explored, it is mainly as a facet of Zuckerberg's juvenile emotional structure. It is a bit of character-building, and has little to contribute to our understanding of privacy or abuse in social networking. Zuckerberg is morally demeaned by his asinine motivations, but the fact that the enterprise facilitates such abuses, and the many ways in which sites like it contribute to the violation of privacy, is a subject that is shoved far under the rug as the more Hollywood-friendly personal dramas and corporate clashes are unfolded. Speaking of boardroom and bedroom, what about the embarrassments people have faced after sharing things on Facebook that ended up being read back to them in a job interview, or by a professor, or a potential lover? Don't expect to see much about that in the film that proudly proclaims its subject matter as "the social network". Ditto the burning issue of anonymity and identity assumption on social networking sites. Allusions and over-the-shoulder glances at these sorts of questions are all Fincher can spare while servicing the narrative requirements of an audience trained to latch onto one-line putdowns, predictable betrayals and obvious moral contradictions.

How about the fact that some people, not least of all high school and college age kids who would do much better to study their calculus and chemistry texts, are all but addicted to Facebook, spending hours a day on it? That is apparently not good movie material either. And if quite a lot of the 500 million-plus members think the whole thing is boring, stupid, intrusive, and a waste of time; if many of them basically joined Facebook just to see what all the fuss is about; if millions of Facebook members have chatted with an old buddy or flame a couple of times, acknowledged a few "friend" requests from people they later realized they didn't really want to be "friends" with, and then forgot about the whole thing - does FB still seem like such a great idea? Well, there's no questioning along those lines in the film; it would take the punch out of the presentation of FB as a phenomenon worthy of big-budget cinema. The denouement of the film starts with Facebook passing the 1,000,000 member mark, as if that in itself is a terrific thing. What if it is a deplorable thing? You won't find much in this film to detract from the sense that the Facebook juggernaut is ultimately something we should drop our jaws and admire, after all is said and done and the lawsuits are behind us. Zuck may be a jerk, it seems to say, but "you don't get to 500 million friends without making a few enemies". And by the time the closing sequence has rolled by you know that once you have 500 million friends you can always pay off those enemies and perhaps make yourself feel okay about the whole thing. Maybe they will feel okay too.

As for the generation that has largely made Facebook the phenomenon it is, they come off as vapid, party-going types who go to fancy schools, drink beer, have stripper parties, suck up to the rich and famous, and admire people for all the wrong reasons. No doubt a lot of them are just like that; but so are a lot of every other generation. What, then, is special or unique about the generation that grew up in the digital age, indeed the HTTP age? I didn't see much in the film that asks a question remotely so interesting. Perhaps unwittingly (though I doubt it) the film manages to emphasize the childishness of the millennials by casting Brenda Song, the cute but infinitely spoiled and mindlessly materialistic London from Disney Channel's "The Suite Life of Zack & Cody", as one of the Facebook team's groupies. (London, I mean Brenda, giving blow jobs in a bathroom stall? Nawwww...!) But this stereotype needs to be examined, not just trotted out to fund the thrilling story of Zuckerberg's rise to prominence on the social mania of his peers.

It is duly noted in the closing sequence that Zuckerberg is the youngest billionaire in the world. Okay, I'm jealous - well, of the first $10 million or so, anyway; I've never been able to imagine what I would do with even $25 million, much less a billion. Does the film examine the unique social phenomenon of very young computer types who manage to come up with a hot new web site and either take it public or sell it for millions of dollars? Not really: Zuckerberg's wealth and his relationship to it are not really explored; he is depicted as an idealist who does not really care about money all that much, and is just in it for the fun and perhaps some prestige. Maybe that is true of the real-life Zuckerberg, but still it is yet another alternative theme that is not really developed.

To be absolutely fair, I must admit that the film is inspiring in a certain way; it triggers a kind of slap-happy feeling that you, too, could come up with a brilliant idea that would match some untapped social sentiment and make you rich - only you would do it with a much more upstanding character, not have to screw your friends along the way, and perhaps use the money to save the world. But this only shows that the model of a brilliant Harvard undergrad leading a quasi-Bohemian life while rising to fame (or notoriety) and fortune is very seductive, a life that anyone with a mission and a sense of self-worth might wish he had led and might possibly believe he could still lead. I am not so sure I want to put such seduction in the positive column when scoring a film.

I must say I regret having to be such a curmudgeon. It is not that this film is worse than a million other Hollywood releases; it's better, in fact, than quite a few. It's just that the billing and the reviews do not really match what is achieved. When all is said and done, the film leaves you thinking a lot more about a slightly troubled individual than about the enormous social questions raised by his venture. It just doesn't have much to say about issues on which we are all waiting for helpful insights of any sort. But the will to say nothing is pretty strong in the arts these days, so maybe this is after all, if perversely, the movie of the decade.

Sunday, May 10, 2009

Clueless in Seattle (.Gov)

Paul McDougall points out in an InformationWeek blog on Thursday that a data security letter published on Seattle.gov, the web site for the city of Seattle, contains content that is word for word identical to other news releases. One of these is an article published by InformationWeek writer Kelley Jackson Higgins. Googling the first sentence of the article produced a couple of other sites with identical sentences, though I could not actually find the article when I went to those pages.
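
That sort of check is easy enough for anyone to reproduce, since it amounts to nothing more than exact-phrase matching. Below is a minimal sketch in Python of the "grab a sentence and look for it" procedure - the sentence and URLs are placeholders, not the actual Seattle.gov material, and a quoted Google search does the same job with less typing:

```python
# Minimal sketch of the exact-phrase check described above.
# The sentence and URLs below are placeholders, not the actual pages involved.
import re
import urllib.request

def page_text(url):
    """Download a page and crudely strip its HTML tags, returning lowercased plain text."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    text = re.sub(r"<[^>]+>", " ", html)      # drop tags
    return re.sub(r"\s+", " ", text).lower()  # collapse whitespace

def contains_verbatim(url, sentence):
    """True if the sentence (ignoring case and spacing) appears word for word on the page."""
    needle = re.sub(r"\s+", " ", sentence).strip().lower()
    return needle in page_text(url)

if __name__ == "__main__":
    suspect = "This is the first sentence of the security notice."            # placeholder
    candidates = ["https://example.org/notice", "https://example.com/alert"]  # placeholders
    for url in candidates:
        print(url, "->", "verbatim match" if contains_verbatim(url, suspect) else "no match")
```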

It is particularly odious when a government entity (local or otherwise) indulges in behavior that even gives the appearance of plagiarism, for it undermines values that it should help promote, and justifies unethical practices when it should be serving as the gatekeeper. That said, and granted it is a fine distinction, the publication of previously written content on a web site is not, by itself, plagiarism. That is because it does not attribute the content to someone who did not originate it. In many cases, such as the one McDougall points to (at least originally), the site simply fails to attribute the content at all. We can argue about whether it is implied, by virtue of publication, that the content is original. That is a difficult case to make out, because (1) on many web sites, most of the content is not attributed to anyone; (2) "publication" is a term of art here, because there is normally no peer review and often very little in the way of oversight, or even editing, of the content; (3) the intent to deceive, which is a normal component of most acts of plagiarism, does not clearly exist; and (4) the expectation of profit, recognition, or other benefits that usually motivates plagiarism is not clearly present. So I don't know if McDougall's epithet "blatantly plagiarized" really fits the situation. We should be a little circumspect before lumping cases of anonymous web posts together with those of the student who hands in a plagiarized paper with his name on it as fulfillment of a course assignment, or the author whose name appears on a book or article in which content written by someone else appears without attribution.

The kind of practice McDougall points to is so prevalent on the Internet that picking out one instance almost misses the point. He opens his article by pointing to "news aggregators", sites which provide brief summaries, quotes and/or links to sites which originate content. There is an endless number of such sites, at least if you count them liberally; even such recognized sites as Slate and The Huffington Post could be made out to be part of the problem. Just how bad it is depends on how much unoriginal content is required before the use of it becomes objectionable. But regardless of the precise extent, the prevalence of these practices is sometimes taken as evidence that they pose no problem. We may have a tendency to feel that anything that is very widely done is not worth objecting to. But that is more intellectual laziness than considered judgment. The problem goes way beyond the self-conscious aggregators, who depend in part on a broad interpretation of the concept of fair use. I've recently found the same health care information - exact same wording, as in the examples cited by McDougall - posted on not two, but maybe half a dozen different sites. In some cases, it is the reverse of the present situation: a government institution (such as the National Institutes of Health) appears to have created content to which several for-profit web sites have helped themselves. On another occasion I found the entire text of Harry Frankfurt's essay "On Bullshit" published on a web site in Georgia (U.S.) that claims to be a news site; although it was attributed to Frankfurt and Princeton University Press, I strongly suspect it was used without permission, and could not conceivably fall under fair use principles.

McDougall is right to be incensed. But the problem is much bigger than Seattle; bigger than news "aggregators"; even bigger than the crop of Internet-age college students who sometimes brag that they have gone through college without ever having had to write an original essay. You can change individual behavior by calling people on the carpet; but you can't change a culture by making examples of a few people. Look at the baseball doping scandal, in the news once again now - it is, in fact, far beyond a problem with a few high-profile players, beyond baseball, beyond the U.S. It came to be accepted that this is the way it is in sports, and if you want to be a sissy and not take steroids then you can watch your stats suffer and your salary go down. Similarly, if you want to try to create your own content - e.g., as I think I'm doing right now - you can just deal with it when you find you can't post as often or get as many hits as some "aggregator" who simply takes sound bites that other people have written and makes that look like a great service to humanity.

The whole notion of original content is undermined by the increasing probability that any given content has already been copied from somewhere else without attribution, making it easy to justify copying from the copier without attribution. And who is supposed to be responsible for investigating whether any particular sentence on a web site is original?
Add to that the old adage that "information wants to be free" and you have all the makings of a general descent into postmodern culture where everything is recycled, origins make no difference, and creative incompetence is transformed into the virtue of making apposite choices of material to repeat.

All this is too heavy a burden to rest on the shoulders of some IT dude who ripped some data security articles and broadcasts and put them up as public notice on a non-profit web site. That person must have already soaked up so much of the free-flowing whiskey of repetition that s/he is drunk with possibilities as s/he surveys the limitless expanse of already published web content, and not without some justice counts it as a bona fide professional virtue to be able to sift through it all and post the most relevant items in little blue boxes on Seattle.Gov. The nature of this act is buried under so many layers of banality that some of the commentators on McDougall's blog questioned why the whole thing was even worthy of a blog post! (What exactly does it take to be worthy of being a blog post these days? Some people actually take the "blog" idea seriously and post any old drivel on a daily basis just to chat in the public's ear.)

Others questioned the wisdom of McDougall's final paragraph, in which he has the chutzpah to mention Microsoft, in the following context:
Microsoft has been at the forefront of efforts to combat piracy and intellectual property theft, but most of its efforts have focused on China and other developing markets. Note to Ballmer and Co.: Time to start looking in your own backyard!
True, as the commentator who posts as "NYSSA" pointed out, MS is in Redmond, not Seattle. As if Redmond had a reason to exist other than as a Seattle suburb until MS put it on the map. Surely it takes only a limited grasp of geography to see that it counts as MS's "own backyard". NYSSA then asks, "Is anything happening in Washington now to be tied to Microsoft?" True, MS is not responsible for every tree cut down by Weyerhaeuser. On the other hand, McDougall did not say that MS is responsible for the posting on Seattle.Gov. But NYSSA misses a much bigger point, though I'm not sure McDougall even wanted to go there (so I will). We are in the midst of a vast proliferation of digital content; a single sentence or an entire article can end up on dozens of web sites overnight, and the phenomenon extends even to official government sites. It is pervasive, and threatens us with a general cultural and moral decline, as the distinction between originality and regurgitation gets blurred, and the value of creative expression is lost. What is Microsoft's response to all this? What is Apple's response? What is Sony's response? Three little words, "We're losing money!" (Abbreviated "DRM".) The very people who brought us into the world where the virtual is the real and the real is the virtual have only one complaint: dirty no good rotten Chinese pirates and Russian hackers are threatening our profits and making it possible to copy not only literary content (about which we could not give two hoots, let Google worry about that one) but software, songs and movies! The nerve! Track them down, fine them (or their parents, if they happen to be under age) or force them to take jobs as data security consultants!

Not too long ago I had in my hands a resume from a consultant applying for a high-level position managing a major IT project. One of the requirements for the job was the ability to write complex design and analysis documents. People were required to submit samples of their work. What I received from this consultant looked very professional; I was duly impressed. But just as a precaution, I grabbed a particularly well-put sentence from his document and stuck it in my favorite search engine. You'll never guess what popped up... perhaps the latest Annual Report from the company where the consultant had just completed an assignment? This is why it is ever so appropriate that Harry Frankfurt's article should end up copied wholesale on some obscure web site, available free of charge to anyone who doesn't want to pay Amazon $9.95 for a hardcover reprint of On Bullshit ($7.96 for the Kindle edition, speaking of repetition - that would be, I think, the sixth licensed version of what is a philosophy article of ordinary length). We are becoming a nation, no, a world, of bullshitters, and the bullshit is hitting the fan. The proof of it is that when somebody like McDougall gets decently outraged at a shining example of the bullshit that is blowing in the wind, some people cannot even figure out what the hell his problem is. Here is the latest response to his blog post, by someone who identifies him/herself as "dorsai":

I'm not saying that ignoring copyright is OK, but this might have been a legitimate oversight by an overworked city worker, that appears to be fair use (at least it is now) based on the attribution links on the site.

Not quite sure why this warranted a blog post.
Okay, you're not sure; allow me to jump in. Go click on the link again, my friend, and you'll see an amazing thing: in the aftermath of McDougall's article, the aforementioned overworked city worker (I count myself as one of those, BTW - different city, though) has found the time to amend the article with attributions for each and every one of the security alerts he posted on the site! Now there's a reason to write a blog, by golly! Something that was a little off got corrected, the original writers received credit, the Seattle IT guy undoubtedly feels better about the whole thing, and the world is in general a better place! You know, if I could claim such accomplishments for most of my dozens of blog posts (not all on this blog) I think I would just throw a big party and invite everyone at Seattle.Gov and CMP too. Fortunately for my budget, I can't make such claims. But let's not get so jaded that we just look the other way or make excuses every time someone points out a mass instance of unattributed copying. The most encouraging thing about the whole incident is that the Seattle IT guy took the issue more seriously than his would-be defenders, and did something about it. Which suggests once again that if there is hope for a creative rebirth of humanity, it will probably come out of Seattle. Or at least the soundtrack will.

Saturday, February 28, 2009

February Crashes and Burns

February 1909 was not exactly a dull month for disasters. There was the wreck of the Penguin, a New Zealand ferry that struck a rock on February 12 and took down some 75 of the 105 passengers aboard. Then on the 16th, a mine explosion at the West Stanley Colliery in Durham, England, ended the lives of 168 people. The cause was never determined. The U.S. contributed to the carnage with several train wrecks, including one near Delmar, DE that took seven lives, and another in Carbondale, IL that sent five more to their graves. Overall, though, it was far from the worst month for catastrophes here, at least according to what I can glean from the not exactly uplifting information on GenDisasters.Com.

In fact, it seems to have had its moment of courageous victory over the forces of evil, as a horrible train wreck near Whitesboro, NY (three miles west of Utica), ended with the loss of only two lives, both railroad workers. The engine of a westbound train exploded just as it was passing the passenger cars of an eastbound train, throwing the entire eastbound train from the tracks, scattering cars into a field and leaving a long trail of debris. The wreck on the redeye apparently happened before daybreak, at temperatures of 15 below. Nevertheless, uninjured passengers on the westbound train, which merely stalled, rushed to the assistance of others, as townspeople also jumped out of bed to lend a hand. In the end, not a single passenger died; the injured were rushed to hospitals by sleigh and other means. Shades of a certain emergency landing on the Hudson River almost exactly 100 years later. Maybe there is some cosmic symmetry in history after all.

Unfortunately, if there is, it's not all of the kind to jump and cheer about.
There is a story that some time around the turn of the century there were only two cars in the state of Kansas, and they collided. Probably an apocryphal tale from Ripley's Believe it or Not, though repeated in popular culture often enough to make it seem real. That supposedly occurred on July 4, 1904. It now has an eerie parallel in a so-called "freak accident" in which the French nuclear-armed submarine Le Triomphant triumphantly rammed the British nuclear-armed sub H.M.S. Vanguard. The date of the sub crash was only given as "early February". The two submarines were playing hide-and-seek, the game in which these units armed with enough power to destroy much of the civilized world try to hide their whereabouts from one another's sonar. Disaster, thy name is Progress! A car wreck can injure a few people and spill some antifreeze (toxic to wildlife). Two nuclear subs can destroy an entire oceanic ecosystem with radiation leakage even if they don't accidentally discharge any nuclear-tipped flying objects and take out a city or two. Why are these things patrolling the oceans, jumping around and hiding from one another like some underwater Mario figure? As far as I can tell, were we to peer into the minds of many of our world leaders what we would find is a bunch of anxious little boys who are mentally primed to pull handles and push buttons, as if the enemies were virtual and the lives they endanger mere animations. Yet we believe we have no choice but to accept this sick state of the body politic.

The almost benign Whitesboro wreck has a less fortunate centenary sister to offset the more pleasant turnout on the Hudson, i.e., the fatal crash of Continental Flight 3407 near Buffalo, NY on February 12. What a comedown, after January's "miraculous" Hudson River landing. But actually, that miracle was no fortuitous gift of god. It depended on a plane engineered to land on water and float for a period of time, equipped with life preservers and a crew trained to react in case of an emergency water landing. It depended on a pilot who had to practice water landings in flight training until he could execute them about as confidently as a normal landing. It depended on communications between air traffic controllers and the flight crew, accurate enough (in spite of some errors) to let rescue teams know what was about to happen and where, before it actually happened. That is not to say that nothing could possibly have gone wrong - had the emergency occurred at night, or the river been laced with big chunks of floating ice, or one or another system failed, there could have been a much worse result.
But this was no miracle, just an example of how technology can be controlled even in the most adverse circumstances when enough planning and careful execution goes into it. With Continental Flight 3407, it is possible that all reasonable measures were taken to de-ice the plane, and that there was no way for the crew to do more than guess where on the plane the additional ice buildup had occurred, leading them to take the wrong corrective action when an autopilot system sent the plane into a descent. But there have been ice-related crashes before, including an awful incident in which a de-iced US Air plane spent so long on the runway at LaGuardia while waiting for takeoff in a March 1992 snowstorm that it accumulated a new, dangerous sheet of ice on the wings, resulting in a crash and 22 deaths.

Birds, ice, lightning strikes, wind shear - we know the most common causes of plane crashes, but for some reason we have not managed to eliminate them. I guess you could ask why we are spending billions of dollars engineering nuclear submarines which can destroy life on earth, when the tremendous scientific and technical efforts and financial resources required to build such things could be put to use figuring out how to build ordinary commercial planes that don't inhale birds or get coated with sheets of ice in flight. That would presuppose that we elect rational leaders, driven by a sincere belief in the good of humanity, rather than overgrown Mortal Kombat fanatics, driven by infantile aggressive impulses. Well, who knows, maybe we'll get one yet.

February did have a kind of miracle, if you like those: the case of the Turkish Airlines plane crash in Amsterdam on February 25, where the vast majority of the passengers survived even as the fuselage split in three pieces on ground contact. That is just a miracle, no way to plan or predict it but plenty to be thankful for. But everything beyond that sort of lucky turnout has to be done by us humans. That's why it is so disconcerting when it seems that everything was done, and lives were lost anyway. We have collectively accumulated millions of miles of flying experience in commercial flights, yet we still cannot control every factor that might cause a catastrophe. So why are we sending nuclear submarines to play cat-and-mouse games beneath the waves? (And whatever you do, don't say "it's the French and the British, not us" - the U.S. has at least 18 nuclear-armed Ohio-class subs, and possibly several others, and all our subs are driven by nuclear reactors.) Two nuke subs accidentally playing bumper cars in the world's vast oceans? One good crack-up and it won't be ordinary ice we'll be trying to lose - more like ice-nine. And that doesn't come off too easily.

Yes, Astroland may have closed but bumper cars are still going strong. On February 10 (at least someplace on earth) two satellites, a defunct Russian gizmo and an Iridium cell phone relay unit, collided in space. Another freak accident, which must have particularly freaked out users of Iridium's international cell phone service; according to a quotation from an Iridium spokesperson on MSNBC, "This satellite loss may result in very limited service disruption in the form of brief, occasional outages." As for the Russian satellite, Dasvidania! The debris now threatens the International Space Station, but NASA claims the risk is "acceptable". It will of course eventually end up on earth. Since a good part of the earth's surface is water, desert, or cemeteries, there is not much to worry about.

Or is there? Well, there's all that satellite debris hurtling around, just waiting to burn a hole in the ISS, which can then in a freak accident come crashing down over... Houston? Oh but no, say the FAA, the U.S. Strategic Command, and the Texas State Police. No, that fireball that numerous witnesses from Waco to Austin saw a few days after the Iridium collision must have been something Apollo (the Greek god, I mean) tossed in our direction for fun. (He likes to play bumper cars too.) "We still think it's possible it might be a natural phenomenon...", a spokesperson for the USSC intoned. Which, correct me if I'm wrong, means almost the same as: "It is likely, though not absolutely certain, that this is a piece of space junk burning up just short of heavily populated areas in Texas." Or, as they might say when they make the movie - "Houston, you have a problem."


Friday, February 27, 2009

Google's Earth, or Is It Brin's World?

An InformationWeek post today by Thomas Claburn is entitled "Google Defuses Street View Privacy With User Photos". It's hard to think of a more misleading way to characterize the situation. Adding user-generated content cannot defuse any issues regarding the still available satellite or panoramic photos of people's homes, windows, front porches, pets and the like. Presenting that kind of content in a commercial setting, where it is detailed and the main subject of the photo, normally requires individual, signed photo releases. Google has managed to get around that, most recently in a flawed decision by a Pennsylvania court. You can get an idea of Google's view of physical privacy in general in their court brief, an excerpt of which was published by The Smoking Gun. Essentially, Google argues (a) that you have no reasonable expectation of privacy for your real property, unless you have taken extraordinary measures to protect it; (b) that the property in question can already be viewed from one perspective or another on publicly available web sites (such as that operated by the County Assessor's office); (c) that while Google photographers did briefly enter a private driveway to make the photograph, that driveway would ordinarily be accessed by many people, such as delivery persons and neighbors, and is not therefore "private" in the way needed to exclude photographs; and (d) that the defendants cannot reasonably claim to have suffered psychological harm from the publication of the photograph and did not take immediate steps to have it removed.

Personally, I think all these arguments are fallacious, because they all overlook two obvious and highly pertinent facts:

(1) Google set out deliberately, and without regard to the preferences of society as a whole or any individual on the planet, to take panoramic "street view" photographs of every piece of real property in several major cities. This was an unprecedented and extremely ambitious venture, and cannot be compared to the casual, accidental or occasional intrusion by an individual photographer. That is not to say that an individual photographer who deliberately enters a private driveway to photograph someone's house is doing nothing wrong; but it is not comparable to the methodical nature of Google's "street view" program, which might be seen as deliberately and egregiously intrusive even if the wandering photo opportunist is not.

(2) Google's purpose in photographing private homes is purely commercial. Although Google self-righteously pontificates about its "mission" in the court statement ("to organize the world's information and make it universally accessible and useful"), the fact is that Google is a profit-making enterprise, and any ideological motives they may have take a back seat to the fact that they are publishing more and more content with the intention of making more and more money. No one asked them or designated them to carry out any "mission"; they do it in the hope of increasing advertising revenue, licensing fees and other sources of income. So when Google hires photographers to stop by your house and take panoramic pictures of it which they can post on the Internet, they are not acting as artists, journalists, casual snapshot takers or any other disinterested party. They are using your property to indirectly generate income for themselves - income to which you are not entitled, though you are the owner of the property. They are not fulfilling any (alleged) security mission, as NASA or the Pentagon might be when they point their satellite at your block; if anything, they are providing potentially dangerous information which would otherwise be quite challenging for terrorists or rogue states to obtain. As I said, the deliberate commercial use of a detailed photograph in which your property is the main subject normally requires a signed photo release; but decisions like those of Judge Amy Reynolds Hay of the U.S. District Court for the Western District of Pennsylvania have helped Google to evade that requirement.

Aside from that, it is just obvious, and should be to any court, that previous publication of unlicensed photos of you or your property is no justification whatsoever for another instance of that to take place. The County Assessor's office may actually have no right, or legitimate official interest, in putting your property on display. All the same, if they do, that does not mean that Google's pursuit of profit is an equally legitimate interest in that regard.

Now, back to the latest news bite: Google has not "defused" any privacy concerns with its addition of user photographs, obtained through its Panoramio service. Google is just helping itself to a bunch of free content, assuming vanity will convince enough amateur photographers to turn themselves into an unpaid, worldwide, roaming Google staff. There is some suggestion that the privacy concerns Claburn thinks this will address are more along the lines of the Google photographers' off-color photos of people picking their noses or peeing in public than the photos of people's private property, but privacy is invaded when a compromising photo is published, regardless of who the photographer is. Contrary to Claburn's view, Google is not somehow exonerated, nor are the victims less entitled to redress, because a compromising photo is selected and presented rather than originally taken by Google.

In fact, rather than fix anything, the new idea only raises questions about Google's use of user-generated content in search results that are part of Google's regular commercial activity. Google's Terms of Service seem to make it clear that they do not own the content you upload (see paragraphs 9.4 and 11). But their TOS are strangely similar to those of Facebook, whose latest ethics fiasco has much to do with their entitlement to uploaded content. Facebook claims the right to make "derivative" use of your content, while Google is a bit more vague. Google says that you grant them an "irrevocable" license to distribute, copy and present the work; they suggest that some rights you grant them to some content may be revoked, but only as specified under unspecified "Additional Terms of those Services". They say you may "terminate your legal agreement with Google" by closing your account and writing to them, but not that this revokes their right to your content. Personally, I would be very wary of uploading any photo to Google that you might ever want either to use commercially or to keep private. The pix of you and your girlfriend or boyfriend by the Statue of Liberty may seem like just about the most innocuous thing to put on Panoramio. Well, think ahead, my friend. That irrevocable display license you granted to Google may just bite you in the rear end someday when you no longer want that particular association exposed to everyone on the planet.

My own view is that Google's so-called "mission" is on a collision course with basic values we have always held, and it is mainly the naivete and glib attitude of a certain technobrat, IT-snob crowd that makes it seem as if those values have simply disappeared. How convenient for Google, not to mention several other privacy-invading profiteers from surveillance and information-sharing technology. It is reassuring that so far, practically every incursion they make into privacy rights gets some kind of pushback; but power, money, and the shallow acceptance of every new social-networking opportunity are pushing even harder on the other side. Which means privacy is constantly losing ground, and often losing court battles. Well, this blogger at least is not going gentle into that good night.

Sunday, January 25, 2009

Is Microsoft Doomed?

I admit I got a little frustrated after no one commented on my Dell post, but now I understand - I just have to post three times a day and carry my laptop with me to the bathroom and I might get some attention, like all those best-selling technology blogs that get high Technorati ratings. Or maybe I should write about nifty gadgets instead of ethics. Well, I don't think it's going to happen. So here I go again, after a brief 8 months or so, with my overly lengthy posts, stubbornly focusing on rights and wrongs in the digital world.

But my topic this time is as much historical as ethical. It is a matter of fact as well as a matter of justice. I guess that amounts to poetic justice, as in the giant who ate so much that he finally exploded. I am talking about that formidable giant Microsoft. I've been in IT since the days when MS's main products were GW-BASIC (officially "Gee Whiz Basic", I think, but everyone knew it was Gates, William). You could say, I suppose, that at that time they were a reasonable company with a few reasonably competitive products - MS COBOL was one of the earlier microcomputer COBOL products, and I think it worked on mainframe and midrange systems too, which gave it some advantages over Ryan McFarland and other products. Then they struck it rich by beating out CP/M to make MS-DOS the OS for the IBM PC, and made themselves a key player in the small-computer market.

Still they were nothing like what they are today. MS-DOS was, by most accounts, not really an operating system but an incomplete I/O manager. In any case, you needed half a dozen supplementary utilities (Norton, Sidekick, XTree, etc.) to make it even minimally user-friendly. The first couple of versions of Windows were complete non-starters; we ignored them and used DESQview for multitasking and window-switching. Early versions of Word had only one reason for existence: the ability to exploit the built-in graphical capabilities of some of the early laser printers. People used WordPerfect for ordinary word processing, or XyWrite for more technical stuff, or maybe DisplayWrite if they were big IBM ideologues, or WordStar 2000 if they wanted more power and more frustration (it had plenty of both).


It was Windows 3.1 that finally put MS over the top and created that great sucking sound whereby all new apps had to be written to the Redmond API. Soon computer vendors were being forced into MS-only deals. Excel, a ripoff of Lotus 1-2-3 (itself a ripoff of VisiCalc), got bundled with Word, which was beginning to resemble a real word processor, and soon MS Office was making a play to be an industry standard, not without some strongarm tactics to help it along.

The rest of the story is pretty well known: Word elbowing WordPerfect (a ten-times-better and more intuitive word processor) out of the market, FoxPro (once MS purchased it) knocking dBASE IV and Clipper out of the water, NT blowing Novell away, IE pushing Netscape to the brink of non-existence... all marketing, my friends; not a single one of these stories is an outright victory for quality. Though we did discover (in case IBM hadn't taught us earlier) that once you own the OS, and thereby pretty much control the industry, you can (a) become big enough to acquire a huge staff of high-end technical people and lure industry bigshots, (b) buy up anybody who has an interesting technology, or simply steal it, fight them in court for 10 years with your huge legal staff, and make subtle improvements, and (c) bloat your products with features that 99% of your customers don't really need but that make for good sales pitches. I suppose some people might actually see this as a victory for quality. I see it as the way an unimaginative but unscrupulous corporation fought its way to become the Apollo of the computer industry, holding the sun in its chariot and determining the course of your day whether you like it or not.

If I had to pick one example to substantiate this it would certainly be Word vs. WordPerfect. WP 5.2 for Windows is still my word processor of choice, now nicely killed off by the elimination of the 16-bit subsystem in Vista. Granted, it is not a graphic design program or truly WYSIWYG; later versions of WP are. But this one is simply the best writing program ever made. And I'm a writer, not a brochure designer. Word is a bloated and extremely frustrating piece of sw which I loathe more with every new version. The first thing I do - try to do - is turn off every single automatic correction, formatting and help feature - but it never works. They're hidden all over the place, like some little mouse hiding stashes of rotten food around its mouseholes. Because my organization only supports Word on the desktop, I have to live with MS's way of doing things, which I detest. Quality is not the name of this game; power is. But the superficial appearance of quality is there: more and more features, rather than more user-friendly versions of the features they already have - that is the MS mentality. Keep going in that direction and you get Word 2007, which sports a toolbar that might as well have been taken from the inscriptions on a pyramid. Nothing will stop the juggernaut of reinvention at M$, because it is what they live for, and by: you can sell the new look as a new product, even if its usability is worse than the previous version's.

So it is not without some sense of relief - call it poetic justice if you like - that I am able to write the headline above. "Some sense of wishful thinking", you might say. After all, in the last few days I have seen numerous pieces in the press alleging that Windows 7 is going to save MS from the ongoing commercial and publicity disaster known as Vista. A piece by David Pogue in the NY Times, several recent pieces in InformationWeek (this one, for example, or check out this alleged Intel endorsement) - they're pretty much all over the place. Windows 7 will save the day. What is the truth of this?

The truth is that MS has weathered rejection before, and has the same tried-and-true methodology for dealing with it: phase out support for older products so that users have no choice but to switch. And fear not, they have already announced the termination of support for Windows XP, the operating system that, despite all its flaws - absurd lack of registry control, endless security holes, occasional BSOD crashes, dubious handling of memory leaks and memory hogs (Outlook being a prime example of both) - was certainly their "greatest hit", especially with businesses looking for an improvement over NT Workstation.

It is typical of Microsoft, and of Microsoft alone, that a "hit" in the OS business, while supporting their stock price in the short term, is a kind of disaster for them in the long term. MS alone thinks of the OS as something you sell over and over again to the same computer user. For IBM and Sun, for example, back when they were heavily invested in the proprietary OS business, the OS was essentially something you bought with a new computer and then supported with a maintenance contract. For MS, the OS is a huge revenue generator; it is supposed to be thrown out and replaced after a few service packs, to the tune of millions of licenses. (I will not get diverted into talking about their increasingly obnoxious licensing schemes, but keep in mind that they went from providing a full, re-installable copy of the OS on floppy disks, to copy-protected versions that would only re-install on the same computer - on a good day - to the current Vista "license", which comes pre-installed, must be backed up by the user, and cannot be restored if a mistake is made in the backup process. Shameful, but typical.) So when one of those OS versions proves stable enough and flexible enough that businesses (especially) think to themselves, "You know, this OS really does just about everything I need it to do; I think I could save the money and an enormous distribution headache and just stick with it", it's a disaster for MS.

I wish more of the professional commentators, the ones who (unlike yours truly, so far at least) get paid for their opinions or make money from their blogs, would just get off the bus and say this in plain English. MS wants desperately for the OS to become obsolete, and it is a plain old catastrophe for them if the rest of the world thinks they actually did a good enough job with the last one that a few maintenance tweaks and a couple of utilities (you know, the much-ballyhooed support for new devices and all that) would carry it through. This of course does not apply only to their OS; it applies to their Office suite as well. I mentioned the horribly reconfigured, visually confusing implementation of Word in Office 2007, all for no tangible gain whatsoever except to make it look like a new product so they can re-sell it to you. The same is true of most MS products. Every new version of MS Access forced us to rewrite a good portion of our Access apps, while keeping the same extremely kludgey implementation of client-server. The .NET Framework 2.0 makes some things easier than 1.1 (database connectivity, for example), and provides a built-in membership class (which, without extensions, is good for only the simplest apps), but it is mainly there to sell Visual Studio upgrades. Every product MS makes is the same: the upgrade cycle is what makes or breaks the company. It is not consumer-oriented. Yes, consumers ask for this or that feature, computers and networks evolve, etc. But a well-constructed OS can go on for a long time being upgraded and patched to keep up with current standards. That model, however, ends up making very little money. MS never intended to follow the model where they provide basic OS functionality and sell service. They just want to resell the product over and over again, repackaged and made more nifty to suit current tastes.

That is, essentially, what Vista is. I am not talking about code-level Vista vs. XP vs. Windows 7. You can debate that all day. I am talking about functionality, performance, compatibility. It is not a new OS, except in the sense that trading this flaw for that flaw while having essentially the same functionality counts as something new. It is a new interface; it allegedly still has a lot of NT code in the kernel; it rewrites the I/O features (e.g., how it handles audio files), but not in a way that universally improves performance. See the Slashdot debate referenced above regarding the audio delivery system; it sounds extremely sophisticated, but here I am writing this post using Vista Home Premium 32-bit with 2GB RAM, listening to the Dandy Warhols on Lala.com, and I get audio dropouts. As I did last night playing another album on Lala. For that I need to buy a new OS?

As for performance, also extremely debatable: it depends so much on the total configuration of hw and sw and drivers, and what tweaks you do or don't apply. As for compatibility, Vista drops 16-bit apps (entirely, in its 64-bit editions), does not support a lot of 32-bit apps, refuses to load a lot of drivers, etc. The one area in which it actually improves things is security, but at what cost? For example, the driver-signing rules are supposedly there to prevent rootkits from installing, and the numerous required user-acknowledgment dialogues are there to keep viruses and trojans from loading. 'Scuse me, but I have anti-virus and anti-spyware sw installed, and I didn't pay for them. So how many of those millions of PC's with the "downadup" worm have Vista installed? A few? Oh, is that because, with all those annoying dialogues, most users haven't got the slightest idea which processes should be allowed and which should be denied? Please take a letter: "Dear Steve/Bill/Whoever'sListening: the same unsophisticated users you talk down to by putting dozens of auto-correct (or as I prefer to call them, "auto-annoy") features in Word are the ones who are going to be responding to millions of those dialogues. How awkward is that?"

I cannot go on and on here trying to analyze and compare every feature of Vista, W7 and XP, and it is kind of beside the point. The OS rewrite, if it helps anyone, helps people for whom the main purpose of a computer is to play games. With all due respect to the Wii and Guitar Hero crowd, and the flight-simulator pilots now busily practicing their water landings, that should not be what drives the PC OS. Not in the least. Not even the consumer OS.

Oh, I hear you: without Vista, no 64-bit support. But businesses, by and large, did not need, and do not need, 64-bit quad-core processors or office apps rewritten to take advantage of them. More likely a lot of them could make do with thin clients, but I don't want to suggest that I am promoting that model either. Setting aside the useless bloated rewrites MS keeps driving through its own sw, a standard Pentium processor is good enough to support the functional requirements of most business users, including most programmers. Yes, you have your CAD users and all, but they long ago started using workstations dedicated to that kind of calculation-intensive computing. And the same goes for most home users. I use my PC to write, maintain my finances, get to the Net, and a few other things. I don't need the MS-generated hassle of dealing with a new OS every couple of years. In my time I have ended up with Windows 3.1, WFW 3.11, Windows 95, Windows 98 SE, Windows ME, Windows XP (a couple of different versions) and now Vista. And I do not consider myself a frequent computer buyer; I do not get a new system every couple of years. Still, I have rarely had the same OS twice, and every switch costs me productivity. The only significant change I have seen in the OS in all that time is plug-and-play, the ability of the OS to recognize devices automatically, which was a great improvement over the old manual driver-installation model. The USB 2.0 spec was an incremental advance on that. Otherwise, it has been much the same thing for about 15 years. Supports a browser now, supported a browser then. Addresses more memory, true, but are you telling me you couldn't have upgraded the existing OS to address more RAM as memory standards grew? Of course you could. There is very little new in the OS since Windows 3.1 implemented a decent method of multitasking. WOUM: Write Once, Upgrade Many. That should be the model - and in a way it is, if you look at the underlying kernel code - but MS wants to make it seem like you are getting a new product, so that you must buy a copy for all your computers, and probably buy new computers for it too.

Windows XP was not, in spite of what they say, a great OS. The taskbar is extremely problematic and gets corrupted easily, causing huge headaches. The registry fills up over time with thousands of unreferenced entries, slowing down or crashing the system; then you can take your life in your hands with some registry cleaner and hope you don't get rid of something you need. BSOD crashes are too common. Errant drivers can crash the system on bootup. Security was pitiful until SP2, and merely bad afterward. There are a lot of things you could say about XP that would recommend some improvements. But the improvements do not need to amount to much more than XP without the bad features. Don't tell me you have to totally rewrite an OS to keep the registry clean, much less to deliver streaming audio so that it (allegedly) hits both your ears at the same time! That is bullshit, the kind of BS that lets MS sell millions of copies of each new version of Windows.

This blog is about ethics, not technical arguments over the virtues of 32-bit vs. 64-bit or this audio codec vs. that one. But here we have a case where the putative technical virtues of an OS are supposed to justify forcing the entire computer world into a huge project involving billions of dollars, millions of man-hours of installation, and millions more dollars and hours in software and hardware upgrades and application rewrites - and those virtues are either not there, or gratuitous for most users, or achievable in a much less costly way. What right does one company have to create this sort of havoc for the sake of debatable or ephemeral improvements? It is absolutely an unethical business practice to change a widely used and understood interface and I/O standard for the purpose of supporting your own bottom line. It is also unethical to conclude deals with every single major retailer - Circuit City (RIP), Best Buy, Staples, Office Max, etc. - to carry only systems with an OS that nobody really wants. (I asked both CC and BB to sell me a system with XP installed, and both refused.) MS's whole concept of how to do business is unethical, which is one reason they have ended up in court so often on restraint-of-trade charges (or something very similar). It starts with strongarm tactics against smaller sw companies, PC vendors and retailers. It extends to the development and marketing of their OS and all their products. It should be recognized for what it is by journalists and bloggers. That's what objectivity is: calling a spade a spade.

Now that they have suffered their most embarrassing defeat in the marketplace with Vista, and lost further ground to Linux and Apple on the desktop, what is their answer? They will push even harder for W7, cut the XP support lines, and pretend Vista never happened. And unfortunately, a lot of the tech-savvy journalists and bloggers are going along with this. That has an enormous influence on the industry, as corporate buyers read the buzz and think they have no choice, and might actually gain something by moving to W7. But will that be enough? I have my doubts. W7 is scheduled for release early next year, and some analysts are suggesting that MS may move up the release date to save their necks. Meanwhile, we are in the midst of one of the worst financial crises in the history of Western civilization. Nothing is stabilizing at this point: not housing, not banking, not technology (bye-bye, Nortel, you did give it the old college try, didn't you? Is Sun next in line for closeout sales?), not employment, and certainly not government, which will soon have committed (by my count) more than a trillion and a half dollars in one form or another to pulling us up by the bootstraps. So is corporate America, and faltering economies worldwide, going to sign up, in the next year or two, for a massive, expensive upgrade to a new desktop OS that is still not really necessary? And what if they don't?

There is indeed a serious question whether they will have a choice. Nothing strikes fear into the heart of corporate system administrators more than the idea of running "unsupported" sw/hw. So MS will make good on the threat to desupport XP, and it doesn't matter if the company has not called MS in three years for support - no more security patches, no more compatibility packs for new devices or hw interface standards - OMG, can't live with that, gotta bite the bullet and go with W7. But what if there is finally a critical mass of companies that decide to turn to open source, SaaS (Software as a Service), "cloud computing" (i.e., hosted apps on a remote Internet site) and other such options? And what if, in response, the level of support for corporate use of these technologies grows? What if more users conclude that they don't need a PC OS at all, and can make do with tablets and other devices? In case you think MS is not the least bit concerned, note for example the pre-emption attempt with the "Tablet PC" feature in Vista, under Accessories. MS has been offering a thin-client OS for something like 12 years (Windows CE) and is now positioning the little-known XP Embedded for service on older PC's that can't run Vista or W7. They are increasingly doing the open-source shuffle without actually committing to much.

Nevertheless, I'm not here to tell you that Linux desktops, thin clients, Android devices, OpenOffice and Joomla/PHP/Perl/etc. are your future. I don't think anyone can predict the future. But as the saying goes, the harder they come, the harder they fall, one and all. And this giant is due for a comeuppance the size of Mt. Rainier. You know, that's the active volcano which tops off at 14,411 feet above sea level, and which according to Google Maps is a 99-mile drive from Redmond. A little less as the crow flies. That's a lotta Java, I mean lava. Come to think of it, Pompeii was very important at one time too.

Saturday, May 17, 2008

Dell Financial Services: Encore?

With revisions, 5/17/08 10:30 a.m.

The very day I posted my previous note about Dell Financial Services and their unscrupulous collection practices, I picked up a copy of the previous Thursday's (4/24) NY Times and found this article on debt-collection outsourcing to India, focusing in particular on Encore Capital Group, the very company that services Dell and is apparently responsible for most of those junk calls I've received. The article, unfortunately, is essentially an apology for these vipers. The author, Heather Timmons, paints debt-collection outsourcing as some sort of growth industry. She apparently thinks it's a wonderful thing that these folks have "a sophisticated automated system that dials tens of thousands of Americans every hour". You get the impression that she is smiling as she depicts the lengths they go to in tracking down their victims. J. Brandon Black, who is identified as the CEO of Encore, is quoted uncritically, as in the following:
Although the stereotype of a collector may be “some guy with chains and a cut-off shirt,” Mr. Black said, collectors in India are “very polite, very respectful, and they don’t raise their voice.” He added, “People respond to that.”
Another promoter of this scurrilous business tells us how lovely it is to be chased down by an Indian collector:
In the past, the prevailing wisdom about wringing money from late payers has been “if you’re calling the Midwest, you want someone from the Midwest to twist their arm,” said Mark Hughes, an analyst with Sun Trust Robinson Humphrey who covers the industry. That theory is changing as the pool of trained phone professionals in India and other locations deepens, and companies look outside the United States for lower costs.
This is definitely one of the most uncritical and least-researched articles I've seen in the Times with regard to something that is obviously an ethically questionable business practice. Ms. Timmons tries to paint it as a fascinating cultural and business phenomenon and does not want to sully the reputation of the industry by bothering to look into the numerous complaints about them that can be found on the web (see my previous post).

Ms. Timmons states that Indian debt collectors are trained in qualities like "sympathy"; but no example of such sentiments is provided. Instead, we find examples of how they handle "abusive" clients by telling them, "This attitude is not going to get you anywhere." This exactly mirrors my own experience with these agents. When I answered the phone and was asked for a person who does not live with me, I said so, and added that they had better stop calling me or I would take action. "This will not help" came the reply; and indeed, it did not - though the abusive party here was the agency, not me. The calls continued, hundreds of them, day and night, even though I had given them everything they had a right to know: the person they were looking for is not at my number.

Talk about a great sucking sound - the article documents how the offshore collection agents have been trained to use the U.S. tax rebate, which is meant to jumpstart the ailing U.S. economy, as a weapon against recalcitrant debtors. "This gives you an advantage so you can increase your wallet share", a collection team leader says. Bye-bye, stimulus, hello trade deficit. These companies are, as I suggested, paid by commission. Encore also "files sheaves of lawsuits against customers who do not respond".

All this is very disturbing material, and yet none of it is delivered by Ms. Timmons in a tone that is anything but sympathetic to the industry. For example, she interviews a U.S. debt collector, someone well placed to name the specific problems with overseas debt collectors, like the added annoyance of being harassed for money by someone who does not speak understandable English. But this same person is quoted as saying that he "had not run into any specific problems with overseas debt collectors" and that they "are very well spoken". Can anyone believe that this was not coaxed for the sake of legitimizing the offshore vampire approach to debt collection? What sort of reporting this is, I don't know; I'm a mere blogger, unskilled in the journalistic ways of Times correspondents.

One of "Encore's best collectors", a gum-chewing 27-year-old, is said to be one who "wheedles work and cellphone numbers out of debtors' relatives to track them down". If this isn't enough to make your skin crawl, you have to read the article and get a load of the self-congratulatory spirit of these collection teams, which operate like sales forces and applaud each other when they elicit a commitment - relishing the thought that someone halfway around the world who can't pay their bills will be lining their pockets with commissions. Still not enough? These people make only $425 a month, plus commissions; sounds like all told their incomes range to about $300 a week - about 20% of what U.S. collectors are said to make. It's a tried and true technique: get the exploited to do the exploiting for you, and don't forget to give them a sense of purpose and a belief that they are doing a good thing. And if you can keep them 12,000 miles from the people they are abusing, so much the better.

People should pay their debts, or not incur them in the first place. But aside from the impossibility of predicting a disaster that will render you unable to do so, or the lack of alternatives when you cannot afford basic necessities, there are the usurious interest rates still charged by credit card companies, the exploitative schemes built into the structure of debt (the right of banks to charge you up front for the interest due on a long-term loan, for example), and outright mistakes, such as the ones that many customers say Dell made in their billing. In my experience, a great many so-called debts referred for collection are simply charges that the customer disputes or has already paid, and result from either aggressive billing practices or careless record keeping. None of that is the collector's concern, though; they get a referral and start cranking out the nuisance calls on their wonderful high-tech automated calling system. It seems to be part of the financial system that you are responsible for taking the time and effort to prove that you paid something, even when it is the company's fault that they have no record of it. This is why I do not regard debt collection as a completely legitimate business. Outsourcing it makes it even worse, adding a glib disregard for ethical norms of conduct and the frustration of dealing with automated calling systems and heavily accented staff. This is not a charming cultural phenomenon to be viewed as a growth opportunity amid stagnation. It is a depressing deterioration of both the moral integrity of U.S. business practices and our ability to rein in the outflow of capital. Why the Times would see fit to do anything but lament this is beyond me.