Mark Hancock has a nice write-up of a talk given by Mike McCarthy on spoken English. The write-up concludes with an interesting metaphor of a corpus being a corpse, language that is no longer alive. It asks whether only using corpus examples is the best way of trying to improve a learner’s use of English.
Very few corpus folks would suggest only using corpus examples, and furthermore a lot of corpus work goes beyond the purely quantitative to also consider the teaching implications.
For example, the researchers extracted lexical bundles from a spoken corpus of 1.7 million words and then went through those manually, keeping only the pedagogically interesting ones, e.g. in other words (kept) vs. er this is a (discarded).
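The extraction step itself is easy to automate; the pedagogical filtering is the manual part. A minimal sketch in Python, assuming the corpus has already been read in as a list of tokens (the function name, sample text and threshold here are my own, not from the study):

```python
from collections import Counter
from itertools import islice

def lexical_bundles(tokens, n=4, min_count=10):
    """Count recurring n-word sequences (lexical bundles) in a token list."""
    ngrams = zip(*(islice(tokens, i, None) for i in range(n)))
    counts = Counter(" ".join(gram) for gram in ngrams)
    # Keep only bundles frequent enough to be candidates; deciding which
    # are pedagogically interesting (in other words vs. er this is a)
    # still has to be done by hand.
    return {b: c for b, c in counts.items() if c >= min_count}

tokens = "in other words the point is in other words we mean".split()
print(lexical_bundles(tokens, n=3, min_count=2))
```

With a real 1.7-million-word transcript the thresholds would of course be much higher, and most published bundle studies also normalise counts per million words.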
Manual review of the list also showed them a hitherto under-emphasized aspect of spoken lectures – the introduction and definition of new terms.
Their analysis split these cues into the more transparent but less frequent ones such as call and mean (e.g. …what theorists call…, …what do we mean by…) and the less transparent but more frequent ones like basically and essentially (e.g. …which are basically…, …so it’s essentially…).
Further, they showed how complex the delivery of many of the definitions or concepts was: there was a lot of rephrasing, sometimes using the word or but often with no signposting language at all, and key definitions usually came at the end of a series of connected points (back-loading).
In addition, they found that lecturers often did not explicitly refer to their PowerPoint slides, which could make it difficult for students to pick out the key terms.
A corpus may be like a corpse, but, as on the crime show CSI, dead bodies can reveal an awful lot.
This lesson idea is based on what is called the Descriptive Camera, a camera which takes a picture and outputs a description of that picture.
Show students the following picture and say: “Tell me something about this.”
Follow up question – “What else can you say?”
Give them 3 minutes or so to respond. Write up on the board any interesting or engineering-related lexis.
Show them the next picture:
Ask them to label the above photo with the following:
BeagleBone (embedded Linux platform)
You could also elicit other electronic components visible in the photo, e.g. power wires (red and black), signal wires (green, yellow, black), USB connector, power connector, Ethernet connector, breadboard.
Now divide students into two groups, A & B. Explain that each group will get a different text. Group A’s text will explain what the device is, why it was made and the results of the device. Group B’s text will describe how it works.
Group A text:
The Descriptive Camera works a lot like a regular camera—point it at a subject and press the shutter button to capture the scene. However, instead of producing an image, this prototype outputs a text description of the scene. Modern digital cameras capture gobs of parsable metadata about photos such as the camera’s settings, the location of the photo, the date, and time, but they don’t output any information about the content of the photo. The Descriptive Camera only outputs the metadata about the content.
As we amass an incredible amount of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they’re doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections. Of course, we don’t yet have the technology that makes this a practical proposition, but the Descriptive Camera explores these possibilities.
After the shutter button is pressed, the photo is sent to Mechanical Turk for processing and the camera waits for the results. A yellow LED indicates that the results are still “developing” in a nod to film-based photo technology. With a HIT price of $1.25, results are returned typically within 6 minutes and sometimes as fast as 3 minutes. The thermal printer outputs the resulting text in the style of a Polaroid print.
The technology at the core of the Descriptive Camera is Amazon’s Mechanical Turk API. It allows a developer to submit Human Intelligence Tasks (HITs) for workers on the internet to complete. The developer sets the guidelines for each task and designs the interface for the worker to submit their results. The developer also sets the price they’re willing to pay for the successful completion of each task. An approval and reputation system ensures that workers are incented to deliver acceptable results. For faster and cheaper results, the camera can also be put into “accomplice mode,” where it will send an instant message to any other person. That IM will contain a link to the picture and a form where they can input the description of the image.
The camera itself is powered by the BeagleBone, an embedded Linux platform from Texas Instruments. Attached to the BeagleBone is a USB webcam, a thermal printer from Adafruit, a trio of status LEDs and a shutter button. A series of Python scripts define the interface and bring together all the different parts from capture, processing, error handling, and the printed output. My mrBBIO module is used for GPIO control (the LEDs and the shutter button), and I used open-source command line utilities to communicate with Mechanical Turk. The device connects to the internet via Ethernet and gets power from an external 5 volt source, but I would love to make another version that’s battery operated and uses wireless data. Ideally, the Descriptive Camera would look and feel like a typical digital camera.
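As an aside for the curious (or for stronger groups), the control flow described in the Group B text can be sketched in a few lines of Python. This is my own illustrative stand-in, not the inventor’s actual code: the mturk_* functions are hypothetical placeholders for the real Mechanical Turk calls, and dummy implementations are supplied so the flow can be run without any hardware.

```python
import time

def run_camera(capture, mturk_submit, mturk_poll, printer, poll_interval=1):
    """Capture a frame, submit it as a HIT, wait for the description, print it."""
    image = capture()                        # USB webcam frame
    hit_id = mturk_submit(image, reward="1.25")
    while True:                              # yellow LED: results "developing"
        description = mturk_poll(hit_id)
        if description is not None:
            break
        time.sleep(poll_interval)
    printer(description)                     # thermal printer output
    return description

# Dummy implementations so the flow can be exercised without the device:
responses = iter([None, None, "A corner of a desk with tangled wires."])
result = run_camera(
    capture=lambda: b"jpeg-bytes",
    mturk_submit=lambda img, reward: "HIT123",
    mturk_poll=lambda hit_id: next(responses),
    printer=print,
    poll_interval=0,
)
```

Passing the hardware-facing pieces in as functions is just a convenience for the sketch; on the real device these would talk to the webcam, the Mechanical Turk API and the thermal printer.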
After each group has finished reading, ask them to find someone from the other group and explain their text in their own words. Tell them that people from Group A should start the exchange. Also tell them that Group A will need to ask Group B to explain two things: the term Mechanical Turk and the abbreviation HIT.
Monitor and feedback as necessary.
Then get the groups to swap texts; each student now reads the new text and writes 3 comprehension questions. They then find a new person from the other group to ask the questions to.
Again monitor and feedback as necessary.
Various lexis could be followed up e.g. ask the students if they know what GPIO is and if they can point it out in the second photo above.
Additionally the following video (up to the 3:44 mark) could be shown:
Example video comprehension questions: What additional reason did the inventor give for developing the prototype? What extra information did you hear from the video?
Various extensions could be done, e.g. students could find out more about Amazon’s Mechanical Turk and the origins of the term; discuss whether they would buy such a device if commercialised; or describe the three photos shown at Descriptive Camera themselves.
Engineering Connections with presenter Richard Hammond is a BBC series (though originally on National Geographic Channel) about general engineering. Each episode looks at an engineering structure/technology as developed from earlier/other technologies.
The sometimes surprising connections lend themselves naturally to an engaging lead-in. For the Formula One (F1) racing car episode (Series 3, Episode 2), for example, one can project the following phrases taken from the introduction of the episode:
A revolution in artillery
A new design for a jet engine
An ancient boat
and ask students how these five things are related. After some time for responses, let the students know that they are all related to an F1 race car; there is inevitably a lot of curiosity about how some of the 5 things could be related. The lead-in could be extended to include a discussion of the various hypotheses students may have.
Students are then put into 5 groups and assigned one of the five connections to watch and take notes on. They need to be able to explain to the class afterwards the details of how their connection is related to the topic of the episode. If you have fewer than 5 groups, each group could be assigned more than one connection, or one of the connections could be watched by all groups.
The length of each connection in an episode averages about 8 minutes (a typical episode runs about 50 minutes in total).
The class discussion involves a lot of engineering lexis as well as general English lexis. Even more importantly, students are motivated enough to push their classmates to explain their feedback more clearly, which lets them practice concept-checking questions, rephrasing, etc.
The only downside is that I don’t personally like Richard Hammond and in some of the series his inane grinning can grate!
I had been struggling to find an interesting and modern example to use to demonstrate the passive voice in writing about processes. Previously I had used a ‘how to make an X-wing fighter from two Paris metro tickets’ which turned out marginally better than using wine-making as the process!
So I was glad to see a tweet (hat tip @chadsansing) which led me to an article on a project that turned a set of library steps into a giant game. And as a bonus the text accompanying the video used the passive voice. Authentic, interesting text, not made-up, stiff, out of date prose!
One could start by asking: What do you think is happening here? This would then lead on to eliciting various vocabulary needed for the writing task: stairs, tin cans, (tennis) ball, balloons, game, etc.
Before, during and after photos could then be used to encourage thinking about the procedure which goes from an empty staircase to a game via electronics and collaborative work.
This video of the event could be shown next:
Students would be told to write up a report of the event as if they were a journalist for a newspaper.
Finally a gap fill could be given of an actual write-up:
Four flights of seventy-two stairs ____ _______ into a giant game board using 1,200 feet of wire and 48 Internet-connected tin cans _______ with green and gold helium balloons at DIY: Physical Computing at Play. These were our targets.
The customized game ____ _______ after we invited designers and web developers Michael J. Newman and Scott Hutchinson to Kennedy Library at Cal Poly in San Luis Obispo, to present at Science Café, an ongoing community series. They made a couple of drives up from L.A. to find inspiration and check out our Brutalist building. The two saw our dramatic stretch of concrete stairs and knew they’d found their game board.
At the event, participants built and tested simple circuits then rigged our staircase using the wire, cans and balloons. Then we aimed and threw tennis balls down the stairs, hoping to knock over the cans, which acted as live switches on foil tape.
Cans _____ _______ to a breakout box by 25’ wires, and a live site updated the score whenever a can from either the green or gold team ____ _____ ____. Working with the library’s IT group, the site ___ ______ on digital displays throughout the building as well as on participants’ mobile devices. Cal Poly linked to the scoring site from the university’s home page.
Use these words: to be (x5), transform, decorate, conceive, attach, knock over, share
As an additional activity one could use the following interview as a listening quiz:
The Scale of the Universe is an amazing interactive animation showing the universe from the smallest scales to the largest. It was created by 14-year-old twin boys.
I decided to use it in a reading activity alongside trialling the use of an online whiteboard.
As students explored the Scale of the Universe they had to:
note down 5 new things they discovered
make notes on 10 objects
write down 10 new words they met
I told my first group to write the above into an online whiteboard – DabbleBoard (now defunct, see picture below, names removed to protect the hopeless).
It turned out that I should have advised them to first type their text into a notepad and then copy-paste it into the whiteboard, since they had difficulty entering text directly and it would occasionally disappear.
Another issue to be aware of is the temptation for them to fool around drawing over their classmates’ words and such like, although that is the flip side of using such a tool.
I am not sure if I would use an online whiteboard for a reading activity again. I plan to try it with a video listening activity where students would invent some comprehension questions, write them on the whiteboard and then try to answer their classmates’ questions.
(Dabbleboard reading activity)
Additional note – a good thing about Dabbleboard is that you don’t need to invite users by email; guests can just go to the web link for the whiteboard, which saves the need to collect emails.
Dabbleboard is somewhat buggy and you risk losing drawings, so I cannot recommend it for now. I guess I will go back to Google Docs! If anyone can recommend a good online whiteboard that doesn’t require participants to log in, let me know!
I recently did this activity again and recorded the shared Google document the groups used to answer the task questions (I revised the questions down to two). The recording below is of a low-intermediate group.
Nathan Hall (@nathanghall) has been writing about online collaboration tools which you can read about here and here.
The very best teachers have a magical ability to help students with vocabulary: they can effectively go beyond one or two aspects of a word to cover maybe four or five. But I doubt even the superteachers can cover, say, twelve aspects (Shaw 2011). This is where corpora come in handy.
A corpus is a searchable database of everyday language texts, and that searchability is what makes it useful to language teachers and learners.
In one of my classes students had some difficulty with the word bandwidth. I had set it the week before as one of ten words to find the meanings of. When bandwidth came up during a vocabulary game it seemed to stump most of them, and even those who managed a reasonable definition were still frowning over their understanding of the word. Meanwhile, all I could add to help them was some collocates (actually only two: high and low).
I have been pondering using corpora in my classes, so I thought a blog post might clarify some possible uses.
Fortunately, a new interface to the Corpus of Contemporary American English was released in January 2012, called wordandphrase.info.
Plugging bandwidth into the word search bar gives me the following screen:
(wordandphrase.info result for bandwidth, click image to enlarge)
1. Shows the WordNet definition.
2. Shows collocates, and surprisingly (or not, as the case may be) my use of ‘low’ as a collocate is not listed.
3. Shows the frequency in the five registers.
4. Shows examples of the word as it appears in the texts in the corpus.
Immediately one can see on a single screen a wealth of interesting information. For example, bandwidth has no listed synonyms; it is most frequent in the academic register; and high is its most common collocate, followed by available.
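Behind the scenes, a tool like wordandphrase.info is doing frequency counts over the corpus. For the curious, the collocate count can be sketched as a simple window count over a tokenized text; this toy Python version, with a made-up sample sentence, is my own illustration rather than how the site is actually implemented:

```python
from collections import Counter

def collocates(tokens, node, window=4):
    """Count words appearing within `window` tokens of each occurrence of `node`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for t in tokens[lo:hi] if t != node)
    return counts

text = ("high bandwidth is available but low bandwidth connections "
        "limit available bandwidth").split()
print(collocates(text, "bandwidth", window=2).most_common(3))
```

Real corpus interfaces go further, ranking collocates by association measures such as mutual information rather than raw counts, so that very common words like the do not swamp the list.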
If I had been able to show that in class it would have been great (or not, if this was the first time they had heard of a corpus!).
I think this post has clarified a bit my thinking on corpora and maybe helped a reader or two of this blog.
For a great series on corpora, check out Jamie Keddie at onestopenglish.com (apart from the introduction, you need to be a member to access the rest of the series).