The thermodynamics of glittens, mini-narrow reading and collaborative writing

The ever-resourceful Rachael Roberts (@teflerinha) recently posted on collaborative writing, which got me thinking about the idea generation phase of such writing. Storch (2005) found that the participants in her small study spent most of their time in this phase of the process. As time is always tight, a way to short-circuit this so students can get to the actual writing is desirable. Krashen (2004) argues that reading related texts is an efficient reading method.

So why not employ mini-narrow reading, where you have a small number of related texts for students to read? Their task is to make notes on their text, exchange the information they have, and then write a paragraph together describing what they read and the relationship between the texts.

This offers a way to bypass the long idea generation phase. I tried this recently using these two texts – Science proves that you should wear glittens; Branching in biology animation.

The resulting engagement does of course depend on the texts one chooses. In this case I can confidently say that the students were into the task, though I would have liked more time to explore their thoughts directly.

You can see some of the development in the written work in the following video:

Thanks for reading and if you have any collaborative writing tips let me (or Rachael) know.

References:

Krashen, S. (2004). The case for narrow reading. Language Magazine, 3(5), 17-19. Retrieved from http://www.sdkrashen.com/articles/narrow/all.html

Storch, N. (2005). Collaborative writing: Product, process, and students’ reflections. Journal of Second Language Writing, 14(3), 153-173.

Runaround – 50 British inventions, scan reading

The Radio Times* is running a poll to find out what people think are the best British inventions; you can find the list here. The short texts are ideal for a scan-reading activity.

I used a running quiz format: students in pairs or threes took turns to run to where the lists were laid out on a table and return with the correct answer, without taking the questions with them. I dictated the questions, which they had to write down, and gave them 15 minutes to do the activity; you can of course adjust this depending on students’ level.

Questions with answers:

1. How many inventions happened before the 20th century? 32
2. How many vacuum tubes were used in the Colossus computer? 1,500
3. What and when was the most recent invention? Steri-Spray, in 2008
4. What was the key invention for the textile industry? Spinning frame, 1768
5. How many inventions are not products but ways of doing things?** 3: float glass, the Bessemer process, cement
6. What percentage of today’s power stations use steam? 75 percent (3/4)
7. Who invented something while locked up? William Addis, toothbrush, 1770
8. What invention relied on a previous invention by a French person? Tin can
9. Who thought of an invention whilst in the bath? John Shepherd-Barron, ATM
10. What invention helped to catch a criminal? Electric telegraph

For some questions I also awarded bonus points if they could answer with further details, e.g. for Q4, the year the spinning frame was invented.

You can use a cleaned-up version of the list in .odt format (50 British inventions) if you don’t want any of the pictures from the Radio Times link.

At the time of this post, Steri-Spray is leading at 27%.

Hope you enjoy the activity, and don’t forget to add your vote!

* Shame about the commercial fundamentalism promoted in the commentary by Michael Mosley and the tired presentation of invention as an individual act.

** For question 5 I also accepted that carbonating water could be seen as a process, making the answer 4 processes, including the invention of soda water.

Language Point lesson – Place hacking London Olympic Shard

I heard about Language Point via Twitter (HT @philwade). It is a resource site for lessons in English, French, Spanish, and German, run by @Marie_Sanako, and a good outlet for teachers wanting to promote their lesson plans. The upload procedure is very easy (though I have only tried it with a text file so far).

At the moment topical Olympic-related content is being asked for, so if you have such a lesson, why not put it on Language Point?

If they get 400 people registered on the site by 15 May, one member could get 50 euros of Amazon vouchers.

You can find my Olympic related lesson here.

Update:

Woot! My lesson has been made a featured item for the week!

Point and click and describe – a lesson idea for engineering students

This lesson idea is based on what is called the Descriptive Camera, a camera which takes a picture and outputs a description of that picture.

Show students the following picture and say, “Tell me something about this”:

Follow-up question – “What else can you say?”

Give them 3 minutes or so to respond, and write up on the board any engineering-related or otherwise interesting lexis.

Show them the next picture:

Ask them to label the above photo with the following:

  • BeagleBone (embedded Linux platform)
  • Thermal printer
  • Status LEDs
  • USB webcam

You could also elicit other electronic components seen in the photo, e.g. power wires (red and black), signal wires (green, yellow, black), USB connector, power connector, Ethernet connector, breadboard.

Now divide students into two groups, A & B. Explain that each group will get a different text. Group A’s text will explain what the device is, why it was made and the results of the device. Group B’s text will describe how it works.

Group A text:

The Descriptive Camera works a lot like a regular camera—point it at a subject and press the shutter button to capture the scene. However, instead of producing an image, this prototype outputs a text description of the scene. Modern digital cameras capture gobs of parsable metadata about photos such as the camera’s settings, the location of the photo, the date, and time, but they don’t output any information about the content of the photo. The Descriptive Camera only outputs the metadata about the content.

As we amass an incredible amount of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they’re doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections. Of course, we don’t yet have the technology that makes this a practical proposition, but the Descriptive Camera explores these possibilities.

After the shutter button is pressed, the photo is sent to Mechanical Turk for processing and the camera waits for the results. A yellow LED indicates that the results are still “developing” in a nod to film-based photo technology. With a HIT price of $1.25, results are returned typically within 6 minutes and sometimes as fast as 3 minutes. The thermal printer outputs the resulting text in the style of a polaroid print.

Matt Richardson, Descriptive Camera.

Group B text:

The technology at the core of the Descriptive Camera is Amazon’s Mechanical Turk API. It allows a developer to submit Human Intelligence Tasks (HITs) for workers on the internet to complete. The developer sets the guidelines for each task and designs the interface for the worker to submit their results. The developer also sets the price they’re willing to pay for the successful completion of each task. An approval and reputation system ensures that workers are incented to deliver acceptable results. For faster and cheaper results, the camera can also be put into “accomplice mode,” where it will send an instant message to any other person. That IM will contain a link to the picture and a form where they can input the description of the image.

The camera itself is powered by the BeagleBone, an embedded Linux platform from Texas Instruments. Attached to the BeagleBone is a USB webcam, a thermal printer from Adafruit, a trio of status LEDs and a shutter button. A series of Python scripts define the interface and bring together all the different parts from capture, processing, error handling, and the printed output. My mrBBIO module is used for GPIO control (the LEDs and the shutter button), and I used open-source command line utilities to communicate with Mechanical Turk. The device connects to the internet via Ethernet and gets power from an external 5 volt source, but I would love to make another version that’s battery operated and uses wireless data. Ideally, the Descriptive Camera would look and feel like a typical digital camera.

Matt Richardson, Descriptive Camera.
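(A side note for readers, or engineering students, who want to dig into the code side: below is a minimal Python sketch of the capture–describe–print loop the Group B text outlines. It is not Richardson’s actual code: it uses the Adafruit_BBIO library in place of his mrBBIO module, the pin names and serial port are assumptions, and describe_via_mturk() is a hypothetical placeholder for the Mechanical Turk round trip.)

    # Sketch only, not Matt Richardson's code: Adafruit_BBIO stands in for
    # his mrBBIO module, the pin names are assumptions, and
    # describe_via_mturk() is a hypothetical placeholder for the HIT step.
    import subprocess
    import time

    import Adafruit_BBIO.GPIO as GPIO  # real BeagleBone GPIO library

    SHUTTER_PIN = "P8_14"  # assumed input pin for the shutter button
    STATUS_LED = "P8_15"   # assumed output pin for the yellow "developing" LED

    GPIO.setup(SHUTTER_PIN, GPIO.IN)
    GPIO.setup(STATUS_LED, GPIO.OUT)

    def capture_photo(path="/tmp/photo.jpg"):
        """Grab one frame from the USB webcam with the fswebcam CLI tool."""
        subprocess.run(["fswebcam", "--no-banner", path], check=True)
        return path

    def describe_via_mturk(photo_path):
        """Hypothetical placeholder: post the photo as a Mechanical Turk HIT
        and block until a worker returns a text description."""
        raise NotImplementedError("submit a HIT and poll for the result here")

    def print_description(text):
        """Send the description to the thermal printer, assumed to be
        attached as a serial device."""
        with open("/dev/ttyO1", "w") as printer:  # assumed serial port
            printer.write(text + "\n\n")

    while True:
        if GPIO.input(SHUTTER_PIN):             # shutter button pressed
            GPIO.output(STATUS_LED, GPIO.HIGH)  # "developing", as in the text
            description = describe_via_mturk(capture_photo())
            print_description(description)
            GPIO.output(STATUS_LED, GPIO.LOW)
        time.sleep(0.05)                        # crude debounce/polling delay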

After each group has finished reading, ask students to find someone from the other group and explain their text in their own words. Tell them that people from Group A should start the exchange. Also tell them that Group A will need to ask Group B to explain two things – the term Mechanical Turk and the abbreviation HIT.

Monitor and feedback as necessary.

Then get the groups to swap their texts; each student now reads the new text and writes 3 comprehension questions. The groups then find a new person from the other group to ask the questions to.

Again monitor and feedback as necessary.

Various lexis could be followed up, e.g. ask the students if they know what GPIO (general-purpose input/output) is and if they can point it out in the second photo above.

Additionally, the following video (up to the 3:44 mark) could be shown:

Example video comprehension questions: What additional reason did the inventor give for developing the prototype? What extra information did you hear from the video?

Various extensions could be done, e.g. students can find out more about Amazon’s Mechanical Turk and the origins of the term, discuss whether they would buy such a device if it were commercialised, or describe the three photos shown at Descriptive Camera themselves.

Online whiteboard to enhance reading activity

The Scale of the Universe is an amazing interactive animation showing the universe from the smallest to the largest scales. It was created by 14-year-old twin boys.

I decided to use it in a reading activity alongside trialling the use of an online whiteboard.

As students explored The Scale of the Universe, they had to:

  1. note down 5 new things they discovered
  2. make notes on 10 objects
  3. write down 10 new words they met

I told my first group to write the above into an online whiteboard – Dabbleboard (now defunct, see picture below; names removed to protect the hopeless).

It turned out that I should have advised them to first open up a notepad, write there, and then copy and paste into the whiteboard, since they had difficulty entering text directly and it would occasionally disappear.

Another issue to be aware of is the temptation for students to fool around, drawing over their classmates’ words and suchlike, although this is the other side of the coin of using such a tool.

I am not sure if I would use an online whiteboard for a reading activity again. I plan to try it with a video listening activity where students would invent some comprehension questions, write them on the whiteboard and then try to answer their classmates’ questions.

(Dabbleboard reading activity)

Additional note – a good thing about Dabbleboard is that you don’t need to invite users by email; guests can just go to the web link for the whiteboard, which saves the need to collect email addresses.

Update 1:

Dabbleboard is somewhat buggy and you risk losing drawings, so I cannot recommend it for now. I guess I will go back to Google Docs! If anyone can recommend a good online whiteboard which doesn’t require participants to log in, let me know!

Update 2:

I recently did this activity again and recorded the shared Google document the groups used to answer the task questions (revised down to two questions). The recording below is of a low-intermediate group.

Update 3:

Nathan Hall has been writing about online collaboration tools, which you can read about here and here.