week 10 summary

This week I’ve been trying to coalesce the readings for this block into some sort of working theory. I realise that I’ve been skirting around the edges in what I’ve written; this is probably because I flatter myself that I’ve essentially understood what they mean (except in the case of Haraway, where I thought getting to the edge was pretty good going). I’ve been musing a lot about what this would look like played out in education, and what the uniting of meaning and matter…ahem…means for online learning. I’d like to apply this, if I can, in my assignment…although my crazy visions of sending material artefacts through the post are probably dispensable…

It also links to something a bit more complex. Jacques Derrida seems to keep coming back to haunt me, and the attacks on Platonic forms (or the separation of subject and object) recalled Derrida’s insistence that words don’t have a finite number of dictionary meanings (even ones accepted by the culture using the language); rather, each time a word is used, its meaning is unique to that occasion (uniterable). This seems to be very much connected to the idea of ‘chair’ (subject) versus the six different chairs in my house (objects). The chair at the top of the stairs may look exactly like the one with the laptop on it, but they are not the same object…and therefore, you cannot use the same subject to refer to both. There are many interesting ways this thought could go, but two occur to me that are linked to our topics this week:

As I mentioned in my notes on this week’s readings, it is important to Edwardsian ‘experiments’ that the topic explored is perceived as new, even if the ‘experts’ involved have done the experiment many times before. The uniqueness has to do with the uniqueness of the participants, which is built into the [concern, or whatever the 'thing' is], but also (in proper post-human style) with the uniqueness of everything else involved as well–the uniterable chair.

I think this also shows Pedersen’s investigation of the lines between human and not-human, and why they need to be drawn in the first place, from a slightly different angle. First, if a human is not iterable, then a clone is as ‘human’ as its genetic twin (that sorts out Never Let Me Go, not to mention The Island). But it goes much further: without iteration, everything is unique. There aren’t any dichotomies because there aren’t any categories. Now I’m not suggesting taking this down the slippery slope of, e.g., Douglas Adams’ ruler of the universe. Going back to language, we still use it to good (not perfect) effect in all its uniterability. And so we still need to see the difference between a hairdryer and a handgun. But it’s the right to draw lines between categories that is to be relinquished, to make way for the liquidity and ignorance that have always been there anyway.


virtual vagrant

Costa Coffee O2 Wifi registration page

This week began with our home broadband dying, making me a sort of virtual vagrant on my study day…I have to confess that I avoided the university library because 1) I needed to be able to answer my mobile and 2) undergraduates. I also avoided the office because 1) work. So where was I to go in the Durham snow? Coffee shops, natch. I’d already given all my personal details to Caffe Nero’s free wifi provider, but had some more to do online after I’d finished the coffee, muffin, cup of water… So I went over to Costa. It’s hard to take a picture of an iPad with a phone while trying to look nonchalant and not knock over your coffee…but someone else has done it, so here it is. After submitting, you are also asked for your name, birthdate, postcode and Costa loyalty card number. I faked the middle two, left the third blank and signed up…but the server was down. Sat drinking second large latte of the afternoon…checked email with husband’s 3G…tried wifi again…started writing this…coffee almost gone…wifi started working…couldn’t sit in full cafe any longer with empty mug. Gave up and went home.

I wasn’t particularly surprised by the amputated feeling that five days of no home internet gave this particular cyborg. I think it was more the non-monetary cost of getting online…exchanging information about yourself for a service seems more invasive than exchanging money (although exchanging money over public wifi feels pretty invasive too!), especially when the service is advertised as ‘free’ and the purchase of mildly overpriced, non-essential food items is required before one can avail oneself of the service to begin with. And this is all assuming that you’ve got a wifi-ready device about your person…


week 10 readings

Another collection of thoughts on the readings…hopefully toward informing my assignment!

has no subject

As I was reading Edwards and Pedersen, and trying to connect them to the week 9 readings as well, I got in a bit of a muddle. Edwards seems to want to envision a ‘thing’ without a subject. I could think of three possible interpretations:

1) I could be a Montessori student and pursue an interest, letting it lead in all different rhizomic directions involving other people, objects, ideas, information, etc.; but then I would still be the centre–I’m pursuing what I find (subjectively) interesting and following those tracks.

2) We could have a group of interested people pursuing something, but then the group (or perhaps its convener) would be the central subject, following their collective and/or diversified interests.

3) We could look at these two scenarios another way, and suggest that the individual or group could be de-centred in favour of the interest/topic–this is the central ‘thing’, because it’s not really a thing, but an amalgam of the people, information, locations, objects, times and ideas that go into its thing-ness.

But how does this change what we do? This is where I got stuck, because it seemed to me that, if you try to bring down subjectivity (which Edwards does grammatically as well as philosophically throughout his article), you’re bringing down agency as well. In order not to act as a subject, it seemed, I couldn’t be consciously pursuing anything. I can’t escape my own subjectivity.

Clearly this isn’t Edwards’ goal. What I assume he (and Pedersen as well) means is that we need to revolutionise our perception of our subjectivity by acknowledging both total inter-relatedness and the falseness of the dichotomy between meaning and matter.

Okay, so I can go on pursuing my interests and ‘experimenting’, as Edwards says; I’ve just got to be constantly aware of the never-ending network to which my experimental topic is attached, and avoid trying to disentangle abstractions from objects…

becomes the object

What annoyed me about Edwards was that, apart from the grammatical gymnastics, the whole article was (as far as I could tell) a total abstraction. There was not a single example of what he was talking about, not one tentative application to real life, not even a ‘this is what not to do’ scenario. I could guess what he was getting at, but this mostly came from prior knowledge of other educational theories–and not terribly new ones; was his idea so radical that it couldn’t be described in any way other than abstraction? (Pedersen, on the other hand, was the perfect foil–taking time to tell two stories that informed the later abstractions.)

So, while my concern about individual/group agency, as far as interest-pursuit goes, was put to rest, I was still not clear on what this would look like on the ground for learning. I could imagine Edwards abolishing all universities and putting public funds towards everyone making their interest-pursuit experiments public. Or I could imagine him making employers give their workers weekly time to learn something new related to their jobs. Somehow, though, I couldn’t shoehorn his ideas into a more traditional classroom.

Even Angus et al., while a good example of teaching students how to perceive the world in this way, were still instructors of a certain course at a university. If their subject had been American literature 1700-1850…or nuclear physics…they might have struggled more to follow the rhizomes, cover the topic as their department expected and relinquish the authority of the instructor…all of which they had to contend with as it was. The context itself dictates that one person (or a few) tells other people what kinds of things they need to engage with.

tyranny of self?

This brings me back to the question of agency. But the problem now is: how do we justify pursuing one interest over another? I would argue that it isn’t problematic that Angus et al. decided that students who had chosen to take their course (or chosen a path of study for which it was compulsory) should engage with this topic. They had the experience to tell them that this was a beneficial thing to do; the students might not have been equipped to make such a decision themselves, or could have followed unproductive or skewed pathways; they might have wasted a lot of time looking for resources that the instructors knew of, or might have gotten hold of something incorrect and continued on with misinformation…most importantly, though, we don’t know what we don’t know. That, to me, is what experts are for.

On the other hand, I don’t think this necessarily goes against the arguments we’ve encountered over the last couple weeks. In fact, I think this kind of group situation, where there are acknowledged experts who structure but do not lead, and where the non-experts are responsible to themselves and each other, has the potential to reduce subject-centrality both in an individual’s worldview and in their ‘experiments’. The problem with the interest-pursuit model remains that, as much as one person can attempt to let a topic guide them and de-centre themselves, they are always going to be a subjective self. A group, on the other hand, and especially one with members who have both ‘done it before’ and realised the uniqueness and value of every subsequent ‘doing it again’, is more likely to keep taking its focus away from the self and putting it on the common area of inquiry.

parting shot

In academia, of course, as well as in many other arenas, a good portion of this inquiry is going to lead to…

  • abstract ideas
  • very narrowly defined research
  • rehashings of power relationships
  • retellings of dichotomied meta-narratives
  • wikipedia


Hugo

What strikes me about both this automaton and the ‘Turk’ is that the point of them is that they’re not human, and yet they’re made in human likeness and to do uniquely human things. Is this a cul-de-sac of humanism?


Horrible Histories – Napoleon Bonaparte vs The Mechanical Turk

A silly but pithy clip about the Mechanical Turk–more extensively explained here: A Point of View: Chess and 18th Century artificial intelligence. On the one hand, it was the non-humanness of the ‘Turk’ that made it internationally famous…but this would have been down to the ingenuity of its creator (if it had been real), and its chess-playing skills were in fact down to a series of bendy chess masters (its cyborg mates?). In a way, though, the Turk was also a post-human: an amalgam of the creator, the machine, the chess master inside, the chess board and pieces, the rules of the game, and the expectations/perceptions/suspensions of disbelief of the Turk’s opponents and audience. Without any one of these, it wouldn’t have existed as…itself!


Journal of Ambient Intelligence and Humanized Computing

I haven’t had a good look at the content in this journal, but I like the fact that it exists–and that it’s both interdisciplinary and crosses academic/practitioner boundaries (terribly post-human…except that it’s a traditional journal…). Here’s what they cover:
‘…all aspects of ambient intelligence and humanized computing, such as intelligent/smart objects, environments/spaces, and systems…various technical, safety, personal, social, physical, political, artistic and economic issue…’


memes, mind and robots

This brief interview (apparently procured by a journalist pursuing Daniel Dennett out of the lecture theatre and down the road to his lunch appointment, and only taking the hint when the starter arrived) touches on a couple things we’ve been discussing.

One is memes, which Dennett seems to be suggesting rely on a social hierarchy to be passed from the top on down. (I guess for modern times he’s thinking of marketing/advertising, curriculum, political parties and religions here, as well as more microcosmic social hierarchies like a workplace or club…I wonder if things like wikipedia would have an effect on this or not…or does wikipedia hold a certain position in the hierarchy of crowdsourced information repositories…?) But he does also point to a spectrum of memes, from ‘mistakes’ akin to genetic mutation (e.g. a malapropism becoming the standard word/phrase) to purposeful meme-creation (e.g., as far as I can tell, anything knowingly created and made public in any way). The crux here is just quickly quoted, but I think it is quite important to post-humanism: ‘The mind is the effect, not the cause.’

This is expanded a bit more later in the interview, where Dennett criticises the ‘greedy reductionism’ that equates the brain with the mind, turning them both into machines that can’t be held responsible. This certainly echoes Hayles’s problematisation of separating thought from body, but from a slightly different angle. I would interpret this as saying that mind, differentiated from brain, is that constantly fluctuating non-subject; the genes of the brain are mostly unchanging, but the memes, the food, the weather, the beauty and horror of the environment are all constantly working on the pliable mind.

And lastly, in a little throw-away bit (when the man clearly wanted his lunch), Dennett confessed to Short Circuit being his favourite AI movie. Why? Because we can’t help but anthropomorphise the robot.


Week 9 summary

I think I can best summarise my scattered thoughts over the past week with a quote from my undergrad Anthropology professor: ‘Invention is the mother of necessity’.

He was speaking in terms of biological evolution, but this concept quite nicely hints at a nexus joining those much maligned categorisations of the real, the ideal and the virtual.

To start with, the ‘natural’ state of humans is problematised biologically: evolution is all about adaptation to the outside world, regardless of whether it was created by ourselves or not, or whether we want to adapt or not. Moreover, it is driven by anomalies: ‘informational’ mistakes–accidental inventions–that eventually came to be necessary to survival, and necessary to be called human (and not Neanderthal, etc.).

From this point of view, the ‘cyborg’ is as unremarkable as a grey moth in Birmingham. The invention process just added another step–there was the accident of a gene that made us capable of considering using animal skin to keep us warm; we just had to enact it. Ditto building shelters, fire, weapons, vehicles, farms, radios, Facebook (more or less). It’s just the complexity of the current invention system that tricks us into thinking that our latest adaptations are ‘unnatural’ or ‘abnormal’.

And the complexity can also be understood in ‘natural’ terms. Dawkins’s ‘meme’ seems to have had a cultural resurgence in helping to define social media phenomena, and is a pithy way of thinking about evolution in post-human terms as the adaptation of a complex [of anything] to inventions external to it (if anything is really external in this sense) and of its own making. Crucially, this is a completely dynamic complex; it doesn’t stop evolving, moving, changing.

I’m arguing here as if ‘post-human’ is a description of how things ‘really’ are, and we’re only just realising it now. I don’t think that’s quite right. First, I’m using the ‘nature’ argument mostly as a challenge to the mystification of the cyborg–not necessarily to say ‘whatever is, is right’. But I’m also aware that a self-satisfied ‘we’ve finally got it all figured out now’ is precisely what post-humanism is not. So I’ll just leave post-humanism as a potentially useful way to perceive…stuff…


Information age

I certainly found Hayles more comprehensible than Haraway (which brought me much pleasure in itself…). I’ve just highlighted a few points that struck me:

  • The concept of information as disembodied. I’d never thought about this before, particularly to the extreme that Shannon suggested. I think if someone had presented me with this view outside of this context, there’s a 50-50 chance I would have completely accepted it (either in a simplistic way, because info isn’t tactile, or in a semantic way, because it’s only interpretation that makes the various 1s and 0s that form an Excel spreadsheet, for example, ‘informative’–which seems relevant to MacKay’s argument); I could have rejected it just as easily on the grounds that Hayles discusses, or because I don’t know enough about neuroscience or computers to decide for myself whether ideas or data files are material or not…! And I think my ambivalence betrays a pervasive cultural attitude about information, whether it’s a both/and idea (sometimes information is material, sometimes it’s not), a typology (this type of information is material, that type is not) or, again, a semantic riddle (e.g. is it only ‘information’ when it is material, but something else when it’s not?). There’s a tiny sketch of this ‘interpretation’ point after this list.
  • The body as incidental to consciousness. This is of course connected to the first point, but is interesting as a possible link between humanism and posthumanism (p. 4); is Hayles implying toward the end that this is a skeuomorph–a strangely significant leftover from an older philosophy? It doesn’t seem to me that this disembodiment factors into Haraway’s cyborg or Pickering’s posthuman…rather the reverse…
  • The experience of an information/material duality is a culturally limited phenomenon. Hayles points this out clearly, and I think we could take it further to say that much information is only information (I know I’m playing the ‘meaning’ card again, but it seems to keep resurfacing) if it can be translated materially. Impoverished people could be given access to the world’s data on disease cures, agricultural techniques, manufacturing processes…insider stock market tips, my PIN and internet banking password, the PM’s private email address…but without the material objects to make use of this information, or to offer it to people who can, it really is just a collection of blots of ink, 0s and 1s, vibrations of air molecules.
  • But what does all this have to do with education? After blagging my way through an explanation of what I’d been reading this past week, I was confronted by this question. The answer that I pulled agilely out of the hat was that it gives pause to our usual ways of perceiving the entire learning process. To be fair, this goes back to IDEL discussions, and the example of Socrates’ concern (in Plato’s Phaedrus) that writing things down spelled (so to speak) the end of good education. But what I pulled from Haraway, Hayles and Pickering was that not only does the typical education process artificially separate fields of study which are really all intertwined, but it presupposes that both these subjects and the students themselves are (as far as learning is concerned) disembodied–Platonic forms of information that can be decontextualised, exchanged and compared with each other. Examples could range from a spelling test to an economics seminar discussion on stock market fluctuations. In the first case, the actual knowledge being tested is decontextualised, but a part of the cyborg (the dictionary or spell checker) has also been amputated. In the second case, the seminar leader may expect the students to look at socio-cultural factors, the psychology of traders, the history of market economies, etc.–but only in a distilled form in line with the learning outcomes of the course…and, in a live seminar situation, all from the students’ own heads. The simpler version of this is nothing new: would you rather have a doctor who was good at cramming for an exam, or one who knew where to look for the latest information?
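Since that ‘interpretation makes the 1s and 0s informative’ point keeps resurfacing, here is a toy sketch of it in Python (entirely my own, not anything from Hayles, Shannon or MacKay): the same eight bytes sit there unchanged, and it is only the convention we read them with that decides what, if anything, they inform us about.

```python
# Toy sketch (mine, not from the readings): the same eight bytes read under
# three different conventions. The bytes never change; what they 'inform' us
# about depends entirely on the interpretation brought to them.
import struct

raw = b"posthum1"  # eight arbitrary bytes

as_text = raw.decode("ascii")             # read as ASCII text
as_int = int.from_bytes(raw, "big")       # read as a single big-endian integer
as_float = struct.unpack(">d", raw)[0]    # read as an 8-byte floating-point number

print(as_text, as_int, as_float)
```

Same matter, three different ‘meanings’…which of them counts as information is, I suppose, exactly the Shannon-versus-MacKay question.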

The World of Tomorrow

I don’t advocate watching this whole video (regardless of what a smaller member of my household might say), but for about three minutes starting here (The World of Yesterday) it’s interesting to see a few ideas that have become so ubiquitous in science fiction that they can be casually referenced in a rather innocuous children’s show.

Apart from the cyborg children, robot guards and time machine, what I’m interested in is the off-hand reference to the ‘information age’. Somehow, from the perspective of the future, this seems to raise the question: was there less information in the past? is it just that access to the information was limited, and/or that the information was dispersed? is information manufactured (i.e., does this follow on from the ages of machines and technology?), or is it rather collected? what comes after the ‘information age’?

The manufacturing versus collecting, I think, is particularly pertinent to Hayles’ discussion of the nature of information–did the shopping habits of Tesco customers always exist as unused information, or did the information only come into being when it was collected via clubcards? And was it always material in the form of the products individuals bought, or did it only take material form when recorded in a computer…or printed out for the marketing department?
