Monday, July 24, 2017

Human-level intelligences and you

There has been much ado over the years about computers becoming as intelligent as humans. Several goals have been set up and surpassed, and with each feat of computer engineering we have learnt that intelligence is a slippery thing that requires ever more refined metrics to accurately measure. Beating a human at chess was once thought a hard thing to do, but then we built a computer that could do it - and very little else besides. It is a very narrowly defined skill being put to the test, and it turns out intelligence is not the key factor that determines victory or defeat.

Fast forward a bit, and we have computers giving trivia nerds a run for their money in Jeopardy. Turns out intelligence isn't the defining factor here either, on either side. For computers, it's all a matter of being able to crawl through large amounts of available data fast enough to generate a sentence. For humans, it's a matter of having encountered something in the past and being able to recount it in a timely fashion. Similar tasks, indeed, but neither requires intelligence. Either the sorting algorithm is optimized enough to get the processing done on time, or it is not. Either you remember that character from that one soap opera you saw years and years ago, or you do not.

The win condition is clearly defined, but the path to fulfilling it does not require intelligence proper. It can go either way, based on what basically amounts to a coin toss, and however you want to go about defining intelligence, that probably is not it.

The question of computers becoming as intelligent as humans has ever so gradually been replaced with an understanding that computers do not have to be. In the case of chess, a specialized dumb computer gets the work done; the same goes for other tasks, with similar degrees of dumb specialization. Get the dumb computer to do it really, really well, and the job gets done.

If all you need is a hammer, build a good one.

A more interesting (and more unsettling) question is when a human becomes as intelligent as a human. This might seem somewhat tautological: 1 = 1, after all. Humans are human. But humans have this peculiar quality of being made, not born. As creatures of culture, we have to learn the proper ways to go about living, being and doing. And - more to the point - we can fail to learn these things.

Just what "these things" are is a matter of some debate, and has shifted over the years. A quick way to gauge where the standards are at any moment in time would be to look at the national curricula for the educational system of where you happen to be, and analyze what is given importance and what is not. There are always some things given more attention than others, some aspect promoted above others. And, at the core, some things are deemed to be of such importance that all citizens need to know them. Some minimum of knowledge to be had by all. Some minimum level of intelligence.

And there are always a number of citizens who do not qualify. Who are not, for any given definition of intelligence, up to it.

When does a human become as intelligent as a human?

Friday, July 14, 2017

Some words on media permanence

It is a strange thing about media artifacts that some of them age well, while others do not. Some can be forgotten for decades, only to find a new audience willing and able to engage with them. Others cannot be revived as easily, and are thus consigned to reside only in the memories of those who were there at the time.

To be sure, this applies to things that are not media artifacts, too. Things happen, and after they have happened you were either there or you were not, and your memories of the event are shaped accordingly. It is a very important aspect of the human condition.

But the point of media artifacts is that it is possible to return to them at a later date. They are supposed to have some sort of permanence - it is a key feature. Books remain as written, pictures as pictured, movies as directed. It would be a substantial design flaw if these things did not last.

Though, then again, some things do not last. Books fall apart, movies fade, hard drives crash. Entropy is not kind to supposedly eternal things. Look upon these works, ye mighty.

But. All these things aside: some media artifacts age well, and some do not. Some can be readily introduced to new audiences, while others remain indecipherable mysteries even upon close encounter. There is a difference, and it is very distinct from the question of whether or not we're trying to jam a VHS tape into a Betamax player.

This difference can be clearly seen if we contrast Deep Space Nine and Babylon 5. Despite being from roughly the same time, belonging to the same genre and sharing a non-insignificant portion of plot elements, one of these television series is instantly accessible to contemporary audiences while the other is not. Though it pains me to say, it takes a non-trivial effort on the part of those who are not nostalgically attached to Babylon 5 to view it with contemporary eyes. A certain sensibility has been lost, and the gloriously cheesy CGI effects turn into obstacles to further viewing. Surmountable obstacles, but obstacles nonetheless.

The same goes for computer games. I imagine that, should we use the Civilization series as a benchmark, there would be different cutoff points for different audiences. For my part, the first iteration is unplayable, and I suspect many of my younger peers would balk at Civilization 2. I also fear that 3 or even 4 might present too steep a learning curve for those who were not there to remember them. Not because the games are inherently impossible to play, but because the contemporary frameworks for how games are supposed to work (and how intuitive user interfaces are supposed to be) have shifted between then and now.

A certain sensibility has been lost.

It would be a mistake to label this development as either good or bad. The young ones have not destroyed theater by their use of the lyre, despite all accounts to the contrary. These changes are simply things that have happened, and have to be understood as such. Moreover, they are something to take into account as yet another generation grows up in a society overflowing with media artifacts, old and new.

Some of these artifacts will constitute shared experiences, while others will not. Such is the way of these things.

Saturday, July 8, 2017

Care for future history

These are strange times.

Since you're living in these times, the above statement is probably not a surprise to you. In fact, it might very well be the least surprising statement of our time. Especially if you happen to have a presence on twitter, and even more so if this presence is in the parts where the statement "this is not normal" is commonplace, or where a certain president makes his rounds. The two are related, in that the former refers to the latter: it is a reminder and an incantation to ensure that you do not get used to these strange new times and start to see them as normal.

These times are not normal. These times are strange.

In the future, there will doubtless be summaries and retrospectives of these times. More than likely, these will be written with academic rigor, historical nuance and critical stringency. Even more likely, all the effort put into making them so will be made moot by this simple counterquestion:

Surely, it wasn't that strange?

We can see this future approaching. Less strange times will come, and frames of reference will be desensitized to the strangeness of our time. In a future where it is not common for presidents to tweet at the television as if encountering the subject matter for the first time, the claim that there once was such a president will be extraordinary.

Surely, it wasn't that strange?

It behooves us - we who live in these strange times - to leave behind cultural artifacts that underline and underscore just how strange these times were. Small nuggets of contemporaneity that give credence to the strangeness we ever so gradually come to take for granted. Give the future a clear indication that, yep, there is a before and an after, but not yet, and we knew it.

It is the implicit challenge of our time.

Better get to it.

Sunday, May 21, 2017

Concerning the Dark Souls of US presidencies

It has been said that the current president is the Dark Souls of US presidencies. Which, to be sure, has a certain ring to it, but it lacks the virtue of truth. Let's explore the issue for a spell.

Dark Souls is a series of games built around the notion of gradual player progression. The games might seem hard at first, but if you stick with them you learn how to overcome that difficulty and become good at what the games ask you to do. The difficulty is not mechanical - the challenges do not require superhuman reflexes or superior skills to overcome - but rather psychological. By failing, again and again, the player gradually learns what needs to be learnt. The reward for this application of patience is the opportunity to excel whenever new situations arise that require the very thing just learnt. It is the player leveling up, rather than the player character.

Meanwhile, in the background of all this character development, a world and its long tragic backstory are ever so subtly unfolding. It is not a simple backstory, where this happened after that, but a series of subtle implications of social relations and emotional states of mind. Complex social processes led to cascading catastrophic outcomes which in turn sparked other social processes which -

It is a deep and complex backstory, and for the sake of brevity, it will all be ignored. Suffice to say that much of it is left unsaid, and that the player will have to piece it together from archeological fragments, old legends and features of geography.

From this description alone, you might see what I'm getting at. Gradual self-improvement through patience, slowly unfolding understanding of past events through contextual knowledge, and the characterization of subtle states of mind - none of these things is applicable to the current president, even with excessive use of shoehorns or cherrypickers.

There probably is a past president who would live up to the title of the Dark Souls of US presidencies. But that is a topic for another cycle.

Friday, May 19, 2017

My computer broke down, can you learn it?

With the recent update to Windows being in the news (in no small part thanks to a computer-eating virus which eats non-updated versions), I've been thinking about how knowledge is situated. Which might seem like a strange connection to make, until you are confronted with this question:

"My computer broke down, can you fix it?"

This is a very common situation to find oneself in, especially if one has acquired a reputation for being able to fix computers. (Even if it only came about from helping someone change the background image that one time.) The knowledge required to navigate this situation is not, however, primarily related to computers. Mostly, it comes down to knowing the asker, their general level of computer literacy and the problems they've asked you to fix in the past. It is a very particular skill set, and over time you develop it through use and abuse.

The aforementioned recent update seems to have crash-landed a fair number of systems, if anecdotal evidence is anything to go by. This made me think about whether I could fix my system if it went down as well, and after poking around for a bit (and making an extra backup of all the things for good measure), I figured that I probably could, given time.

If someone were to ask me to fix the very same problem on their system, I probably couldn't. Not because of my admittedly limited skill in these matters, but because of the different situations in which the problem is situated. If it's just me and my broken computer, then I can take my time, tinker with it, fiddle with the knobs and overall do things that are not directly goal-oriented but which nevertheless get to the point eventually. It'd be a learning experience, albeit a terrifying one.

If it's not just me, then a whole host of other constraints and situationally specific conditions apply. For one thing, the asker might not have the patience for me learning on the job; they might want the situation dealt with and gone, and me taking my time is the opposite of that. There's also the added element of risk - tinkering is never 100% safe, and accidentally making the problem worse is equally the opposite of the solution. Being risk-averse is good, but it is also slow (yes, even slower), which overall is not conducive to getting things done in a brisk manner.

The point here is not that computers are fragile (though they are), but that knowing something is rarely a yes/no proposition. Oftentimes, we know something sufficiently well that if we were to try it out on our own we'd probably turn out all right, more or less. More often than not, the things we know stem from some first attempt that went in an orthogonal direction from well, but which nevertheless sparked the learning process that led us to where we are. We tinker, we fiddle, and eventually we figure things out.

Though, to be sure, having someone around who you can ask about these things as you go along learning speeds things up immensely.

Do be kind to their patient hearts.

Monday, April 3, 2017

Automated anti-content

So I was thinking about bots in microblogs today, and it occurred to me that they have the potential to be pure anti-content. A realization which, when stated in these terms, raises two questions. The first is "microblog, really?", and the second is "what is this anti-content you speak of?".

To answer the first question: yup, really. It's faster than describing a subset of social media defined by short messages that are visible for a short period of time, mainly in the form of scrolling down the screen in real time. Gotta go fast.

The second question is more interesting. "Content" is a word that describes some kind of stuff, in general. It doesn't really matter what it is - as long as it is something and can fit into a defined media for a defined period of time, it is content. A person screaming into a mic for twenty minutes is content. It is as generic as it gets.

Anti-content, then. It is not generic, but it is also not original. An example would be the UTC time bot, which tweets the correct (albeit non-UTC) time once an hour. Another example is the TootBot, which toots every fifteen minutes. It is not content, but it is definitely something. You are not going to enthusiastically wake your friends in the middle of the night to tell them about the latest UTC update (though you might wake them about the toot bot), but you are going to notice them when they make their predictable rounds yet again.
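For the curious, here is a minimal sketch of what such a clock bot amounts to. The post() function is a hypothetical stand-in for whatever call the microblogging platform actually provides; the loop is essentially the whole act.

```python
from datetime import datetime, timezone
import time

def post(message: str) -> None:
    # Hypothetical stand-in for the platform's real posting API.
    print(message)

while True:
    # Announce the current UTC time, then do nothing for an hour.
    now = datetime.now(timezone.utc)
    post(f"It is now {now:%H:%M} UTC")
    time.sleep(60 * 60)
```

That is all there is to it: no opinions, no surprises, just the same announcement making its rounds on schedule.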

Anti-content is not content. But it is familiar.

The thing about humans is that they like what is familiar. It provides a fixed point of reference, a stable framework to build upon, and - not to be underestimated - something to talk about. Stuff happens, stuff changes, but you can rely on these things to remain the same. And because they remain as they are, they can be used to anchor things which have yet to become and/or need a boost to get going.

Or, to rephrase: you can refer to them without accidentally starting a fight. After all, they have nothing to say worth screaming about. They are anti-content. And they are a part of the community, albeit a very small part. They say very little, but every time you see them, they remind you of the familiar.

And now that you have read this, you will never look upon these automated little bots the same way again. Enjoy!

Tuesday, March 28, 2017

Free speech vs rational debate

An interesting thing about the most vocal defenders of free speech at all costs is that they often conflate free speech and rational debate. Which is a strange thing to do - if you argue something with loudness and extreme forwardness, the least that could be expected from you is that you know what you are on about. Yet, somehow, free speech maximalists often show a brutal lack of understanding of the difference between rational debate and free speech.

To illustrate the difference, I shall describe a case where it is not rational to engage in public debate, and where the debate itself has detrimental effects on the society within which it takes place. The debate in question is whether it is the right course of action to exterminate a specific group of people.

For those who belong to this specific group, it is not rational to participate in such debates. The most immediate reason is that you might lose. No matter how unlikely, the mere possibility of losing is reason enough to stay clear of such debates. To the proponents of the extermination policy, your participation in the debate is an additional justification for their point of view. "They can't even defend themselves!" they'd claim, and then move from word to action. Perhaps not immediately, but eventually the final day would come.

The tragic part is that you would lose even if you won. If you won, it would most likely be because you gave reasons for why your extermination is a bad idea. These reasons might be good in and of themselves, but there would be a finite amount of them, and with enough journalistic efficiency these reasons could be summarized into a list. From the very moment the debate ended, this list would constitute the reasons society abstains from exterminating you.

The existence of such a list would constitute an opening for those who favor your extermination. One by one, the proponents could work to undermine these reasons, until they are no longer seen as sufficient reasons for abstaining. The debate would reopen, and you would find yourself in a weaker position than last time around. You would yet again have to defend your right to exist, and you would have to do it using an ever shrinking range of possible arguments in your favor.

Needless to say, this process would continue until there are no reasons left. And then the proponents of your extermination would have won.

This is detrimental not only to the group targeted for extermination, but also to the society as a whole. With each round of these debates, society would slip one step closer to enacting genocidal policies. Which, to any decent and moral person, is not a desirable outcome.

The rational thing to do in order to avoid such an outcome is to simply not have these debates. Exorcise them from public discourse, and keep them out of the realm of possible topics. Do not entertain the thoughts, shun those who persist in proposing them, ban them from polite conversation. Keep the opinion marginalized. No good outcome can come from having these debates, and thus the rational thing to do is to simply not have them.

Free speech maximalists want to have these debates anyway, in the name of free speech. But they conflate free speech with rational debate, and as you have seen, there is a very concrete case where these two things are mutually exclusive. If they are to be honest to themselves, they will eventually have to make a choice between one or the other.

If you began reading this post with the opinion that we should have these debates anyway, and still hold that opinion, then I want you to be fully aware of what you are proposing. I fully trust that you will, in your own time and on your own terms, make the rational choice.