RILYBOT

Mathematics, psychology and sociology, philosophy.

Saturday, July 30, 2011

Google Plus Sparks as "Circles"

I have been using Google Plus for a few weeks now, and it seems to have a fairly good design, combining the functions of a "reciprocal friends" network like Facebook with a "one-way followers" network like Twitter.

I have seen a number of comments that circles aren't focused enough -- you want to be able to follow people because of your shared interest, not because of all of their interests.

Here's a solution that fits within the design of Google+:

  • In addition to following people, users follow sparks (this is already implemented).
  • If two different users use the same phrase to create a spark, they are following the same spark.
  • When you select a Spark, a "Share What's New..." box appears at the top of the feed. It works just like a normal share (you can select circles to share with, attach links, etc.) but you are advised to only share things relevant to the spark.
  • Anything shared in a Spark will also appear as a post in your stream so your followers see it. The post contains a clickable link to the spark.
  • When you are viewing a spark, you can choose to show or hide the posts/shares that other people have submitted in this spark. This setting persists until you change it, and you can set it differently for each spark you follow.

Tuesday, May 31, 2011

The Transparent Society, Foretold. Part III - The Tweetbowl

2011 May 31st
III. The Tweetbowl

... Because of a quirk of early communications technology, a small group of New Hampshire girls, including me, came of age on a primitive computer network — the Internet before the Internet.    -- Virginia Heffernan (see link [3])

I went to Dartmouth in the early 1980's, during the peak of XCALIBER, an online chatroom that typically linked 15 to 30 students at any time (day or night), including many people at other nearby schools. I introduced a friend to "the con", as we called it, where she soon met her future husband, a student at a distant school. I was skeptical of this new social medium[3] -- it's too easy to cheat at the Turing test -- but my friend's experience and those of many others in my circle made it clear that digital social networking would be powerful and effective.[1]
In part II, I described the Fishman Affidavit and Streisand effect, two examples of the perverse unintended consequences of trying to get between people and their information. In both of those cases, the information wasn't even of much importance or value to the thousands of people who helped spread it. But those were just warm-up acts.
With the advent of online social networking, and the new communications networks that come with it, truly important things can become the subject of grassroots political action -- such as diplomatic secrets or the government of Egypt. But these social networks have also brought us the fishbowl of Asimov's The Dead Past.
Celebrities are the pioneers of life in a fishbowl. Few people deserve the type of suffering that results from constant scrutiny (after, probably, leading a "normal" life for many earlier years of non-fame), and privacy laws such as those in the UK aim to prevent it[4].
Pursuant to this end, the court provides a service (to any citizen or corporation who can pay the price) of granting a sort of individual, custom-tailored seditious-libel protection. This service, a form of court order called a Superinjunction, forbids someone (typically the entire population of the U.K., save the Members of Parliament) from talking about something, and furthermore orders them not to talk about the superinjunction. It is a gag order, with a gag order on the existence of the gag order. A sort of Double-Secret Probation for the red-top tabloid press, if you will.
Unfortunately for the celebrities, and the courts, and for us (depending on where you stand and how it all plays out), when one of these court orders is enacted it goes directly against one of the few remaining provisions of the Magna Carta, which states[2]:
"We will sell to no man, we will not deny or defer to any man either Justice or Right."
Here is a good summary (on YouTube) of a serious incident from two years ago: the Trafigura affair, from the BBC programme "Have I Got News For You".


A celebrity decided to use this form of "superinjunction" after being blackmailed by a mistress who had no fears or misgivings about making the affair public. Some users of Twitter apparently felt this was a problem, and promptly published the details (well, really, just the names, which was all they needed to). The vast majority of retweets were positive, i.e. anti-superinjunction, and helped ensure that everyone in the UK knew the identity of He-Who-Cannot-Be-Named.[5] This left the celebrity and his lawyers in the amusing position of having to sue Twitter, but being unable to do so without revealing his identity. You'll find it all if you search for "twitter super injunction affair", if you really care.
Celebrity affairs are not really an urgent matter of the public good, but something like the Trafigura affair most probably is. As David Cameron said just before I began this three-part article,
The law and the practice has got to catch up with how people consume media today.
Perhaps it is the "old world order" of the 1950's that is unsustainable, in the light of the emerging digital community.


Footnotes

[1] Dartmouth also had BlitzMail, which brought a form of instant messaging for nearly everyone on campus due to its widespread adoption and ease of use. This was just part of an overall strategy Dartmouth had already been following for 15 or 20 years prior to my arrival. Computer use was a requirement in the freshman math class, and ease of use was the primary design criterion of every aspect of the computers at Dartmouth.

[2] From Commons Debates in Westminster Hall, the 17th March 2011 Backbench Business (skip down to "17 Mar 2011 : Column 140WH")

[3] Most people do just fine balancing the risks and freedoms afforded by chatrooms. Virginia Heffernan gives a great account of her experiences with XCALIBER in her New York Times column My Wired Youth, 2008 Feb 3.

[4] The UK court actions in question limit self-expression as provided by Article 10 of the European Convention on Human Rights, when such expression would interfere with "the reputation or the rights of others", or for various reasons that are in the greater public interest.

[5] He-Who-Cannot-Be-Named: This great nickname invoking Lord Voldemort is attributed to Forbes blogger Kashmir Hill

Friday, May 27, 2011

The Transparent Society, Foretold. Part II - Gorby, Bert, and Barbra

2011 May 26th
II. Gorby, Bert, and Barbra

If the broad light of day could be let in upon men's actions, it would purify them as the sun disinfects.    -- U.S. Supreme Court Justice Louis Brandeis

In the late 1980's a revolution in societal transparency came from a seemingly unlikely place: Soviet Russia. Most aspects of that period in the USSR's history are very hard to sort out, and clearly the policy of Glasnost put in place by Gorbachev had many effects, both intended and unintended. But the contribution of technology to transparency is fairly clear.
Prior to this time, every photocopy machine in the Soviet Union had been watched by a Communist Party member who approved anything copied on it -- lest it be used as an underground publishing tool[1]. The Soviets had begun to clone the Apple II and the ZX Spectrum, and to create several designs of their own. Cellular phones, satellite television and USENET were becoming more widespread[2],[3]. As all of this technology became common in the U.S. and Europe, the USSR had to keep up. A closed society was unsustainable in the long run, if the country was to remain competitive in the world.

Information distribution can also bring about a more restrictive society. American proponents of personal privacy and free information exchange, such as supporters of the EFF, were clearly of the opinion that surveillance should be resisted in any way possible. I suppose most of them saw a downside only after 9/11.
Meanwhile, the same Internet that might have helped al-Qaeda plan its attacks in secret also gave us Evil Bert [4]. A curious chain of events led to Bert the Muppet's appearance in a pro-Osama rally in Bangladesh, seen here. The creator of the poster had collected photos of Osama bin Laden online, and neither he nor the other anti-US protesters realized Bert's significance.

Perhaps the greatest impact of widespread communication is exemplified by the reaction to Bert's appearance in Bangladesh. Most everyone was amused by this, except for the al-Qaeda supporters and Sesame Workshop. Soon, the whole Evil Bert website and its mirror were taken down -- but of course, it was too late. Attempts to remove a popular Internet meme fail, and usually also backfire.
The same social force that causes the spread of controversial (but deemed humorous) photographs also bears upon issues of ethics and morality. This had long ago been demonstrated by the Fishman Affidavit, which showed that millions of people will copy something they didn't even care about prior to hearing of it, in the face of threats by a large and powerful organization, simply because its distribution is being suppressed for the wrong reasons. This phenomenon eventually came to be known as the Streisand effect, after a 2003 episode in which the celebrity called attention to herself by trying to avoid attention[5]. More on that in Part III...

Footnotes

[1] Los Angeles Times (David Owen), "Who Invented Xerox?", Power to the People: the Photocopier, 2004 Aug 10. Here is the relevant portion:
A telling indication of xerography's significance is that in the former Soviet Union, whose rulers maintained their power in part by monopolizing access to information, copiers were guarded more closely than computers, and individual copies were numbered so that they could be traced....

[2] InfoWorld (Paul Saffo), "Today the microprocessor is more powerful than the gun", 1991 Sep 2. Here is a part:
Within hours of Gorbachev's removal, messages were humming between the Soviet Union and points abroad via telephone and such computer networks as UseNet. One note from Moscow underscored the importance of the link: "Please stop flooding the only narrow channel. We need the bandwidth to help organize the resistance." Elsewhere in the Soviet Union, fax machines were zapping messages among resisters, while in the Baltic states, cellular phones allowed people to keep one step ahead of the events.

[3] I heard an anecdote about Gorbachev's house arrest during the August 1991 coup attempt. The anecdote states that, although Gorby's phone lines had been cut, the coup leaders did not cut off cell phone service and Gorbachev was able to call for help that way. 20 years later, I cannot find a source for this story.

[4] Evil Bert is seen here courtesy of archive.org, almost two years before 9/11. The reference to "the World Trade Center in New York City" refers to the 1993 World Trade Center bombing.

[5] Streisand sued (for $10 million in damages) the nonprofit organization that runs the California Coastal Records Project because her estate (along with every other house on the coast of California) was visible in an image on the website, and had been labeled with a caption contributed, wiki-like, by a website user. As described in a press release after the suit was rejected by the court:
...Streisand grossly overestimated the number of people who would use the caption to download or order pictures of her blufftop estate. In her declaration, Streisand claimed that it was likely that thousands of people had downloaded the frame to view her estate. In fact, prior to the lawsuit, only six downloads of that frame were executed (out of a total of over 14,000 downloads for the site as a whole), two of which were downloads by her own attorneys. Similarly, prior to the lawsuit, only three reprints of the frame were ordered through Pictopia - two by Streisand herself and one by a neighbor who is in a lengthy dispute with her over controversial expansion plans for her blufftop estate.    -- CCRP press release

Tuesday, May 24, 2011

The Transparent Society[2], Foretold


2011 May 23rd

I. The Dead Past

A recent story on the BBC World Service caught my attention. Lawyers representing an unnamed football player in the UK were going after Twitter to "subpoena" records of users who helped spread news of an affair, the mere existence of which could not be spoken of within the UK according to a court order.

The privacy laws as practised in England and Wales, and the court's enforcement thereof, are intended to avert some of the Orwellian consequences of the pervasive media environment, by attempting to legislate and enforce a right to personal privacy. (Ironically, the celebrity mistress in question had been in the television programme Big Brother.) In this particular case the consequences were repressive, untenable (or at least "unsustainable"), and often laughably amusing to the point of complete humiliation.

I got my first really good grasp of the multiple issues involved from a short story by Isaac Asimov, which I will attempt[1] to summarize here:

The protagonist is a scientist and polymath, employed as a historian, who has been frustrated in his research on ancient Carthage. Unable to get viewing time on the government's Chronoscope (a massively expensive machine enabling one to view events in the past as if watching them live on TV), he begins conducting his own research and soon works out a way to build his own Chronoscope in his basement.
   He does so, and is met with three challenges in succession. The first challenge is that the machine does not work as expected. Beyond about 100 years in the past, the images are completely washed out by static and nothing useful can be seen.
   While attempting to work out flaws in the design, the inventor is confronted with a second challenge: his wife, who has yet to accept the tragic loss of their young child a few years earlier, starts using the machine to relive his brief life.
   He experiences the third and greatest challenge when, while he is preparing a paper on his Chronoscope work for the academic press, he discovers that the government is tracking his activity and trying to block publication.
   The government's supernatural ability to know what he is doing drives him to paranoia. After a few plot twists, and deciding to destroy his own chronoscope after his wife is reduced to dysfunctional obsession watching their dead child, he manages to get his manuscript out to several journals. Mere hours later, police surround his home and he is confronted by government agents.
   They explain to him how they were able to stalk him so effectively, and simultaneously reveal why the government's Chronoscope is so tightly controlled: The real flaw of the Chronoscope comes not from trying to watch the distant past of 2000 years ago, nor from watching a baby take his first steps 10 years ago -- but from watching events of one second ago. The Chronoscope allows anyone to instantly see everything that is happening, anywhere the viewer decides to look, violating every type of privacy with complete impunity.
   The scientist then reveals that he had succeeded in getting his paper to the journals some hours ago, and all parties, now feeling shared remorse, realize it is too late to stop publication. Soon everyone in the world will have the ability to spy on the present and past lives of everyone else. The agents wish the scientist a "happy fishbowl" and together they toast the dead past -- the simpler life that will never be again.
    -- (The Dead Past, by Isaac Asimov, here paraphrased from my memory[1])

I was initially attracted to this story because of its depiction of the academic community. Like the protagonist, I enjoy invention, research, and combining ideas from diverse fields. I was bothered by the story's depiction of a society that placed severe limits on academic research (particularly interdisciplinary research, which was effectively banned). The twist ending was a good contrast to some of the other futurism-related books I was reading at the time, such as Future Shock, 1984, Brave New World and the like.

A year or two later I learned about RSA encryption through the original (August 1977) Scientific American article. My main interest here was the calculation of large integer exponents, and I was disappointed they didn't explain the arbitrary-precision arithmetic in detail. The article discussed some of the applications of one-way or "trapdoor" codes, facilitating a certain type of privacy that would clearly have significant impact in the areas of spying, criminal activity and law enforcement. I also remember wondering at the time whether these codes could make it easier for someone to broadcast a secret without being caught. There was no clear answer.
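The arbitrary-precision step the article skipped over is mostly repeated squaring with modular reduction at every step, so the huge intermediate powers never have to be formed. Here is a minimal sketch in Python; the function name and the toy RSA parameters (p=61, q=53) are my own illustration, not from the 1977 article:

```python
def modexp(base, exponent, modulus):
    """Square-and-multiply: compute base**exponent % modulus without
    ever forming the astronomically large number base**exponent."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # low bit set: fold this square in
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next bit
        exponent >>= 1
    return result

# Toy RSA keypair: n = 61*53 = 3233, e = 17, d = 2753 (e*d ≡ 1 mod 3120)
n, e, d = 3233, 17, 2753
ciphertext = modexp(65, e, n)          # "encrypt" the message 65
plaintext = modexp(ciphertext, d, n)   # recovers 65
```

Python's built-in three-argument `pow(base, exp, mod)` does the same thing natively, which is one reason Python is convenient for playing with these codes.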

Perhaps the biggest influence on my thinking about future social and political change came from Microtechnology for the Masses, the article[3] by Jon Roland for The FUTURIST magazine in 1979. This article explored the impact of an extrapolated Moore's law on most aspects of life and society. It correctly predicted many details, such as a worldwide data network accessed through dators, hand-held devices recognizable as today's smart phones. Possessing greater computational power than the fastest computer of the day[4], a dator would be:

... a universal personal accessory that will be more important in our daily lives than the clock, the telephone, the typewriter, television, the calculator, the recorder, the copier, the checkbook, the camera, mail, books or files, because it will replace all of these things.

This article also left out a lot, but the creative reader could extrapolate and fill in many details. It was easy to predict an end to the sale of recorded music; somewhat harder to see whether the "datornet" would foster or discourage crime.

(Parts II and III are planned within the next few days)

Footnotes
[1] I have not read the story in 30 years, but the memory seems clear enough. I'll have to find my copy to see how much I got right...
[2] See The Transparent Society and Nineteen Eighty-Four.
[3] Jon Roland, The Microelectronic Revolution. The Futurist, April 1979.
[4] The fastest computer at the time of the Futurist article (April 1979) was a Cray-1. A present-day smartphone is at least as fast; see [5].
[5] Michael Croucher, Supercomputers vs Mobile Phones, ("Walking Randomly" blog article), 2010 Jun 2.

Saturday, January 22, 2011

I Think I'm Learning Japanese

(I Only Think So)

2011 Jan 21st

"Munafo" = "Intellectual Prince"?

(from usokomaker.com/yoji, a novelty yojijukugo generator)

For various reasons, Japanese language and culture have interested me throughout my life. (A few of the reasons, like トトロ, are commonly known and will be familiar to readers. Others, perhaps less so (possibly NSFW themes and language).)

For many years I only knew a few basic facts: there are three alphabets, all derived originally from Chinese, plus the Hindu-Arabic numerals and our own Latin alphabet, and you pretty much have to learn all of them to get along in daily life. The pronunciation of two of the alphabets is simple and logical, but the third (kanji, the Chinese characters) includes thousands of special cases with little structure or pattern.

The depth and complexity of Japanese, for which it is justly famous, have kept me from doing much more -- until just recently, when I have decided to adopt a religion [1] and consequently travel to Japan to visit the head temple.

When I travel to a place that speaks a different language, I want to be able to read and write certain basic things: numbers, times and dates, the address(es) where I will be, etc. Apart from being prepared (the State Department emphasizes the importance of such things), it is fun, and seems to me an act of basic courtesy to try to learn at least a little bit of the local culture.

My ordinary way of learning involves a lot of computer and Internet use. Methods of entering Japanese katakana and hiragana are relatively direct and obvious. You can type totoro to get トトロ (in katakana, as this is a nontraditional proper name), or fujisann to get ふじさん or 富士山 (the native pronunciation and spelling, respectively, of the famous mountain overlooking the temple I will be visiting [2]).

[Screenshots: entering katakana; entering hiragana]

However, typing in katakana and hiragana is only good enough for looking up proper names. The majority of Japanese writing uses the kanji extensively. Most kanji have at least two pronunciations and multiple meanings. In addition, the partial redundancy [3] of hiragana and kanji means that there are usually two or more ways to write any given word or phrase.

When learning any language there are four things to learn: listening, speaking, reading and writing. In most languages these correspond to four skills, which (for an adult learning a second language) can initially be approached through transliteration and translation respectively. But the Japanese situation, with three alphabets and two or more pronunciations and spellings for most words, makes it more like 10 or 12 skills.

The learning curve is a seven-dimensional manifold.

These 10 or 12 skills are inter-related and interdependent. You don't know which way something will be spelled or pronounced, so it is important to learn both (all) of the alternatives.

Let's Just Learn the Writing

At this point I thought: perhaps speech and listening can be put aside for the moment -- what if I focus just on reading and writing? I should still be able to look up Japanese kanji in a dictionary or on Google to find a definition. But that presents another problem, which may strike computer-savvy readers as uniquely odd or even impossible:

In order to be able to type in a kanji character, one must know either how to say it or how to write it.

And by "write", I mean nothing less than the ancient traditional art of brush-and-ink calligraphy.

As it turns out, thousands of years of experience have led to a greatly standardized stroke order for each of the thousands of commonly occurring characters, and the system is so useful that it is identical in all of the cultures that use the characters (primarily those who speak some form of Chinese, Japanese or Korean). The stroke order gives rise to a convenient and efficient software optimization for handwriting recognition, one that works for all native speakers because no-one writes any of the kanji any other way.

So in order to do the most basic and modern task (say, looking up "富士山" on Google Images) I need to learn one of the most ancient and decidedly non-modern tasks (how to paint "富士山" with a brush!).

Entering a Kanji without knowing its pronunciation

(showing a partly-entered "億" (おく, "100,000,000"))

This need to know stroke-order in order to look things up in Google is both a curse and a blessing. Simply being able to produce an accurate drawing of the character is not good enough. You have to draw each line in the correct order. This can be very frustrating for beginners, but that is far outweighed by the benefit: one can practice and learn Kanji writing just by trial-and-error in the computer interface.
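A toy sketch of why a fixed stroke order helps the software: if every character has exactly one standard stroke sequence, the recognizer can discard candidates after every stroke by simple prefix matching. The stroke codes below are invented placeholders for illustration, not a real encoding used by any actual input method:

```python
# Hypothetical stroke codes: H = horizontal, V = vertical, T = turn.
# Because stroke order is standardized, each character maps to ONE sequence.
STROKES = {
    "日": ["V", "T", "H", "H"],
    "田": ["V", "T", "H", "V", "H"],
    "山": ["V", "T", "V"],
}

def candidates(strokes_so_far):
    """Characters whose standard stroke sequence begins with what the
    user has drawn so far -- the search space shrinks with each stroke."""
    n = len(strokes_so_far)
    return [c for c, seq in STROKES.items() if seq[:n] == strokes_so_far]
```

With a real dictionary of thousands of characters, a few strokes already narrow the list to a handful of choices, which is why drawing each line in the correct order matters so much to the interface.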

An Epiphany

It was sometime in the afternoon on January 10th, stumbling through a few of the dozens of equally unlikely ways of writing "蓮" (れん, "lotus"), that I had the stunning insight: Each part of each kanji has its own specific writing order, and is always drawn the same way each time it appears. In this particular case, for example, I can start by learning how to write the "車" (くるま, "car") and the "辶" (チャク, "walk" [4] simplified as radical 162), and the first of these uses "日" (にち, "sun" or "day"). These smaller building blocks each have far fewer possibilities to try, and once I learn them I not only have a fighting chance at drawing "蓮", but am also much more prepared to use any other character that uses any of the same parts (such as "億", shown above, which also uses "日").

This is of course only the second or third thing anyone is taught if they study Chinese writing the "proper" way (like, say, from a book or a teacher). In fact, a friend told me about this in 1982, when I first got curious about Chinese writing. But it kind of slipped my mind somewhere along the line, and it was really cool to figure it out (again) on my own. These kanji building blocks (many of which are "radicals", but many are not) are like little graphical subroutines -- very appealing to my computer programmer aspect.

Footnotes

[1] adopt a religion : I do not proselytize, but if you are curious it is Nichiren Shōshū. The reasons it appeals to me are the relative peacefulness (and lack of political dictation) of Buddhism in general combined with the prominent role of large numbers [5] in the most important source text, the 16th chapter of the Lotus Sutra. (There is a widely available translation by Burton Watson).

[2] fujisan : Note that fujiyama (ふじやま) is a common Western mistake: 山 is usually やま but not in this case.

[3] redundancy : The written alphabets, including the kanji, came to Japan after there was already a distinct Japanese spoken language. The kanji were used wherever a Chinese character (or combination) was directly suited to represent a word. Often, but not always, the Chinese pronunciation was used for the Chinese character. Anything for which there was no word in China, including Japanese conjugations and declensions, etc., had to be represented with extra letters representing their sound (phonetic value). Since the kanji have a pronunciation (and usually two: original Chinese and Japanese), they too have a phonetic value, and one could use just a phonetic alphabet. Often you see both: little kana written above or next to the kanji. There are many situations in which that is preferred (writing by or for young or uneducated readers; texts that would otherwise use rare or obscure kanji; instant messaging, etc.) but the reader cannot count on it.

[4] : This is a derivative of "辵", which my Kanji dictionary does not know about (probably because it is not taught in schools). I like it a lot better than the modern alternative, which seems to be "歩" (taught in grade 2), because it puts the "steps" (彳, "walk", in an idiosyncratic form 彡) before the "stopping" (止).

This character reveals some of the flaws in modern integration of computer technology in our culture. Your computer might show "辶" with three strokes or four:

Compare the page title (top) with the article title.

Wikipedia's article on Hyōgaiji notes:

A related weakness (though less relevant to modern language use) is the inability of most commercially-available Japanese fonts to show the traditional forms of many Jōyō kanji, particularly those whose component radicals have been comprehensively altered (such as [...], and 辵 in 運 or 連, rather than [the traditional form used in 迴]). This is mostly an issue in the verbatim reproduction of old texts, and for academic purposes.

These old and/or rare characters (hyōgaiji) are of great interest to me, as the primary application of my Chinese and Japanese learning will be research on large numbers [5].

[5] large numbers : The Chinese Buddhist quantity "阿僧祇" (Sanskrit asaṃkhyeya, Japanese asōgi) means "incalculable" or "innumerable". As a number it can mean anything from 10^56 (in common modern usage, see Wikipedia "Chinese numerals") to 10^140 (see asaṃkhyeya) to 10^(7×2^103) = 10^70988433612780846483815379501056 (as seen in the Chinese Wikipedia article on Chinese numerals, item 103 of the long list in the "大數系統" section).

But there are far larger numbers, on the order of 10↑↑(10^(5×2^120)), where ↑↑ represents the hyper4 operator or iterated exponential function. That is a "power-tower" of 10's a googolplex of 10's high. See novoloka's article on Avatamsaka numbers and go down to the note at the bottom. Note the description that begins with "The first four verses of this poem are most challenging. They apply a superexponential iteration over an exponential one." For more detail see their article measuring the asamkhyeya.
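The ↑↑ notation is easy to pin down with tiny arguments, even though the values above are hopelessly beyond computation. A minimal sketch (the function name is my own):

```python
def hyper4(base, height):
    """base ↑↑ height: a power-tower of `height` copies of base,
    evaluated top-down, i.e. right-associatively."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# Tiny cases: 2↑↑3 = 2**(2**2) = 16, and 10↑↑2 = 10**10.
# The asamkhyeya exponent mentioned above is exact: 7 * 2**103
# really is 70988433612780846483815379501056.
```

Anything like 10↑↑4 already has more digits than could be stored in any physical medium, which is the point of the notation.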

Friday, October 29, 2010

The contraction of curricula and galaxies

A reader asked me what I thought about the limits of defining large numbers.

Such discussions begin with specific arithmetic operations and mathematical symbols in mind, and usually focus on comparing one system (such as Conway's "chained arrow notation") to another (such as "Bowers' extended operators"). The choice of symbols and operations affects how high one can go, and such discussions usually devolve into competitive games, the limits of which are fairly well handled by the Turing machine and the Lin/Rado "busy beaver function".

But such discussions usually come out of a more universal question, which regards the limits of human thought and perception in general.

Limits of human thought and perception are apparent throughout the history of numbers and mathematics. After a survey of early human developments (such as is presented in the nearly exhaustive "Universal History of Numbers" by Georges Ifrah, ISBN 0-471-37568-3) one might notice some patterns:

  • Perception and understanding are limited by the symbols in use and the concepts they represent,
  • Mastery of a given set of concepts leads to invention of new symbols and concepts.
At any point in history, or within any specific culture, there is a specific set of ideas and symbols which creates (or perhaps reflects, or both?) a natural limit of the capacity of the mind to perceive (say) large finite numbers.

It has been the trend throughout our history that the intellectual developments of earlier generations become assimilated into the body of common knowledge and added to the standard educational curriculum. As new material is added, earlier material is often compressed and taught (usually with greater efficiency) in a shorter period. So it is that the most advanced arithmetic of the early Babylonians is surpassed by that learned by today's 8- and 9-year-old students, and most of the algebra techniques of 9th century Arabia are (typically) learned by 13- or 14-year-olds today, and so on. Both are aided by more recent developments (Indo-Arabic numerals aid arithmetic; certain new teaching methods address the abstraction of variables in algebra, etc.)

Speculating about the limits of the human mind (or brain, for reductionists) can lead to discussions that test or challenge religious beliefs. I suppose the majority opinion in most cultures would state that the human mind has some kind of ultimate limit, which can be compared to the limited physical size of the human brain. (Such a conclusion helps to distinguish believers from God, avoiding blasphemy).

A universe, assuming it is also limited in size (or a visible universe as limited by an event horizon or light cone), would therefore also have a finite limit.

The development of our culture over thousands of years is a bit like an expanding light cone. The contraction of the curriculum into ever-shorter stretches of childhood is like the Lorentz contraction of galaxies known to be much further away, and therefore seen in a remote past, when the universe and the visible universe (our view of the world and the sum total of knowledge) were both much smaller.

Friday, May 21, 2010

A random sampling of my Google queries

On the weekend of the Pac-Man doodle, a friend asked me if I had "Googled anything lately", intending simply to help me discover the playable throwback game (which at the time had sound).

Misunderstanding his request, I prepared the following snapshot of my recent search activity.

Thursday, May 20, 2010 (yesterday) 10:32 PM: "system preferences" network "dns servers"

I was finding out how to use the Google DNS servers, which were indicated on a discussion forum as a way to fix a problem with certain YouTube videos not loading.

3:49 PM: "the band" discography "pepote"

I like to have accurate date tags (for popular music, the year it was on the charts; for classical, the year of debut performance). Here I was filling some missing dates. Many albums such as greatest-hits compilations are tagged with a re-mastering or re-release date, which is meaningless for me.

Wednesday, May 19, 2010 11:26 PM: when the tide comes in all of the boats rise

A metaphor that one of the men in MDI likes to use; he had T-shirts made, and today I wondered where the quote came from. It was originally used by JFK in 1963 when he was promoting spending federal funds on the Greers Ferry Dam in Arkansas.

11:16 AM: Coercive Persuasion "foot in the door" "love bomb"

I frequently look into new ideas and concepts relating to sociology, and one area I often write about is the balance of power between the individual and the group. Here I was trying to find an old reference that I had lost.

1:04 AM: perl bigint

Discovering how to use the Math::BigInt library, which allows arbitrary-precision calculation in Perl.

Tuesday, May 18, 2010 11:53 PM: pdflatex atsutil and MacTeX

I was getting my TeX typesetting system up and running on the Mac Pro.

9:23 PM: Islands of Adventure Harry Potter

Learning about the new theme park area that is opening next month.

5:52 PM: xkcd forum playpen balls

There was an xkcd cartoon in which someone filled their room with those brightly-colored baseball-sized hollow plastic balls; I wanted to find the discussion that would reveal whether such a thing was practical (best price: roughly $8000 for a typical-size bedroom).

5:20 PM: sloane integer sequences

I use this site a lot (The On-Line Encyclopedia of Integer Sequences) and this time I was too lazy to find the link in my bookmarks page, so I did a Google search instead.

2:46 PM: translate portuguese

I used Google to try to figure out how to say something in Portuguese.

2:33 PM: 19111438098711663697781258214361

This number is the first in a set of consecutive prime numbers where the difference between each one and the next is the same number (in this case, 7 primes with a difference of 210), called a "CPAP". It is one of the entries on my numbers page, and I was looking to see if it was still the record-holder for smallest CPAP-7.
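Checking a CPAP claim takes two steps: every term must be prime, and no other prime may fall between successive terms (that is what "consecutive" means here). A sketch using the standard Miller-Rabin test, illustrated on the smallest CPAP-4 (251, 257, 263, 269, common difference 6); verifying the 31-digit CPAP-7 above works the same way, just with more in-between values to rule out. The helper names are my own:

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test. With these fixed bases it is
    deterministic far past the 64-bit range; for very large n it is
    technically probabilistic, but reliable in practice."""
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

def is_cpap(start, diff, length):
    """True iff start, start+diff, ... (length terms) are all prime
    AND consecutive: no other prime lies between successive terms."""
    terms = [start + i * diff for i in range(length)]
    if not all(is_prime(t) for t in terms):
        return False
    return not any(is_prime(m)
                   for a, b in zip(terms, terms[1:])
                   for m in range(a + 1, b))
```

For the CPAP-7 with difference 210, the consecutiveness check must clear six gaps of 209 candidates each, which Python's big-integer `pow` handles in well under a second.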

Monday, May 17, 2010 5:55 PM: ffmpeg me_method dia_size

Solving a problem with the program I use to convert JPG files to MP4 video for YouTube uploads (mainly for my Gray-Scott simulations). YouTube does not like the format of the ffmpeg output (the atoms are not ordered properly for streaming) and directs users to a help/support page that is entirely irrelevant because it only addresses iMovie, Final Cut, or QuickTime.

2:59 PM: Jefferson Airplane discogs

Finding more dates of old music.

1:54 AM: ezekiel chronology and 360 days prophetic year, etc.

Filling a few details in the entry for 945000 and some related entries on my numbers page.