Improving working conditions – isn’t that in the remit of teacher associations?

The title paraphrases a somewhat incredulous question from a teacher attending the open forum on working conditions at the TESOL France colloquium last Saturday (18 November 2017). She assumed that TESOL France, like its sister organisation in the US, was already pushing teachers’ concerns about their working conditions. I think she asked this after I mentioned TaWSIG (Teachers as Workers Special Interest Group, www.teachersasworkers.org).

And that is precisely what the founders of TaWSIG tried to advance with IATEFL, without success. Teacher associations such as IATEFL and TESOL France do sterling work in many areas. And on the issue of working conditions, TESOL France supported a survey conducted in 2014 that put numbers to the cold reality of some aspects of private language school conditions. The survey showed teachers “typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating”. (http://www.eflmagazine.com/tailors-not-rich-salaries-conditions-elt-trainers-france/)

The open forum showed how such frustrations are reproduced in other parts of Europe, drawing on interviews with a co-operative from Spain (www.slbcoop.com), a teacher advocacy group from Ireland (eltadvocacy.wordpress.com) and a working conditions information network from the UK (teflguild.wordpress.com).

The following url will take you to the presentation slides with audio [http://media.englishup.me/tesolfrance-2017/assets/player/KeynoteDHTMLPlayer.html].

The attendees wrote down some of their frustrations:

The discussion got lively enough that there was what one attendee described as a stage invasion.

One of the outcomes we wanted from the forum was to get a mini-committee together to meet regularly; we hope this will pan out in the coming weeks.

Unfortunately the hour was not enough time to discuss more ideas for addressing working-conditions frustrations, though I think people appreciated the role of imagination in this area. This was illustrated in the slides via the example of ELT Advocacy Ireland – from a very simple idea of mapping schools in Dublin to the writing of an underground teacher fanzine.

On the topic of imagination in tactics, here is a delightful example from US organiser Saul Alinsky, describing an idea related to organising a black community against the Eastman Kodak company in Rochester, New York:

Another idea I had that almost came to fruition was directed at the Rochester Philharmonic, which was the establishment’s — and Kodak’s — cultural jewel. I suggested we pick a night when the music would be relatively quiet and buy 100 seats. The 100 blacks scheduled to attend the concert would then be treated to a preshow banquet in the community consisting of nothing but huge portions of baked beans. Can you imagine the inevitable consequences within the symphony hall? The concert would be over before the first movement — another Freudian slip — and Rochester would be immortalized as the site of the world’s first fart-in. (http://stonestreetpress.com/1916/saul-alinskys-flatulent-blitzkrieg-his-own-account-of-his-famous-fart-in-taking-on-eastman-kodak-in-rochester-and-winning/)

Note that the above was never carried out, only imagined. But this is the sort of imaginative tactic needed.

Lastly much appreciation and gratitude to all the volunteers who made the TESOL France 2017 Colloquium happen. See you next year!

 


Grassroots language technology: Wiktor Jakubczyc, vocab.today

It’s been a while since the last post on teachers doing it for themselves, technology-wise. Do check those out if you have not, or if you need a reminder. I stumbled across Wiktor Jakubczyc, the teacher/developer who kindly answered questions for this post, when looking for a GitHub source for vocabulary profilers. And what a find his GitHub pages are.

I think there are good reasons for teaching and education to have a default “inertia” regarding “innovation” (which Wiktor laments in one of his responses) but I won’t discuss this here. Maybe readers will prod me on this in the comments? 😁 I would like to refer to a (pdf) point I’ve made before – that there is a middle ground for teachers to explore regarding grassroots technology.

Anyway, enough of my rambling. Here’s Wiktor – and there is a marvelous bonus at the end for all you CALL geeks:

1. Can you explain your background a little?

I’m an English teacher with over 10 years of experience and an IT freelancer. I’ve taught English all over Europe, in London, Moscow, Warsaw, Bratislava, Sevilla and Wrocław, my home town in Poland. Since I was a kid I’ve loved computers – and that was in the ’80s when an Atari couldn’t really do very much. I passionately want teachers to make the most of digital technologies.

2. What was the first tool you designed for learning languages?

The first tool I designed to help students learn English was a dictionary lookup program for Windows, way back in 2007. Back then, there were good dictionaries you could get for your computer, but I wanted to be able to look up a word in many dictionaries at once. That option simply didn’t exist, so I created The Ultimate Dictionary (http://creative.sourceforge.net) . I got great feedback from my students, fellow teachers and friends – they still use it, and they love it! It’s a very rewarding feeling to create something of value for other people, and to be able to give it to them for free.

A few years later, I discovered that another developer, Konstantin Isakov, had the same idea and made an even better dictionary application – GoldenDict. I used his source code as the base for a redesign of my dictionary, now called Nomad Dictionary. Nomad Dictionary now has Windows, Android and MacOS editions, all available to download at http://dictionaries.sf.net.

My second project was a Half a Crossword creator. Half a crossword is a type of communicative activity for ESL classrooms which emphasizes speaking and vocabulary, two key skills in speaking a language. Students get half a crossword each, split evenly between two students, and have to ask each other for missing information and give definitions for the words they have in their crossword. It’s a fantastic way to revise and recycle vocabulary, while practicing the much-needed skills of asking for and giving information. And students love it!

Again, no such tool existed, which is why I decided to create one. I first made a version of Half a Crossword for Windows (http://creative.sourceforge.net) because at the time Delphi was the only language I could program in. I found it immensely useful in my classes – it was a perfect activity to check how many words students knew before moving on to new material. I tried to get other teachers involved, to spread the word and encourage them to use it, but I found a lot of people were resistant. They loved the idea, but few actually decided to use it in their classrooms.

A few years later, thinking that maybe the problem was accessibility – you needed to download a program, install it, write a wordlist in Word and then save it… it was a bit complicated – I decided to create an online version written in JavaScript. I posted the code for Half a Crossword Online on GitHub (https://github.com/monolithpl/half-a-crossword). Despite the fact that it wasn’t advertised anywhere, quite a few people found out about it, and two people even contributed code! Teachers I talked to also found the online version easier to use, and came to use it with their classes.

3. What do you think of as a relevant tool?

That’s a very good question, which is to say a very hard question. I think a relevant tool has to be personally important enough for the creator to design it (especially if it’s a hobby project) and at the same time good enough that other people also find it useful. It’s rare for these two things to coincide.

Another difficulty lies in the fact that the world of teaching, broadly speaking, is averse to innovation. Very few teachers care to experiment with new methodologies, paradigms or teaching tools. There’s extreme inertia. So getting teachers to change their habits and try something new is very challenging, especially when it comes to technology.

Relevant tools, in my mind, would be those that embrace the DOGME/Teaching Unplugged methodology, the Lexical Approach, personalized teaching, the explosion of mobile computing, just to name a few – all the radical new ideas that have appeared in the last 10 years in language teaching. And they would have to be loved by students, teachers and administrators alike.

4. Do you create tools for languages other than English?

I would love to, someday. I simply don’t have the time to do that now. This is a hobby, after all. The language learning tools I create are useful to my students, my colleagues and myself in learning and teaching English, which is what we do every day. So that is the priority for now.

I hope other people around the world will find the time and be inspired to create tools for their languages. Unfortunately, there is a huge gap between the English-speaking world and the rest of the people out there when it comes to technology: just compare the size of the English Wikipedia versus editions in other languages. The same is true for language data: there are far fewer corpora, frequency wordlists, audiovisual materials etc for languages other than English. There’s lots of catching up to do.

I also think that the world needs a world language, so that we can all start to understand things not just around us, in our local environment, but on a more global level. For that, we need English, so I can understand why most of the interesting developments in language teaching are designed for English students. It’s simply the largest market and user base.

5. What tools are you working on at the moment? What do you have planned for future developments?

Right now I’m working on projects related to wordlists. I have a new version of a Vocabulary Profiler (https://github.com/monolithpl/range.web) almost ready. It’s an app that visualizes word frequency in a text, or in more practical terms tells a teacher how difficult a text is and which words are going to be most challenging for their students. Developing it was an incredible learning experience: I had to figure out how to compress large wordlists so that the app could work on mobile phones, and in the process I discovered trie algorithms, a super clever way of packing words into a small space. I’d like to mention the groundbreaking work of Paul Nation on teaching and researching vocabulary, especially his Range program (https://www.victoria.ac.nz/lals/about/staff/paul-nation#vocab-programs), which I tried to recreate for the modern web.

My most ambitious project to date is an extension of this work – it’s an app to highlight collocations, chunks etc. in a text called Fraze Finder (https://github.com/monolithpl/fraze-finder). It takes the concept of profiling vocabulary to the next level by analyzing multi-word elements, like phrasal verbs, which students most often struggle with. The idea is to help students and teachers notice collocations, to identify them and understand their importance in written and spoken language. The difficulty here is building a good library of these expressions and accurately finding them (with all their variations) in texts. I have lots of ideas for future projects, which I’ve tried to gather together on my personal website vocab.today (https://vocab.today/teacher). I hope one day to complete them all!
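An aside from me rather than from Wiktor: for anyone curious about the trie idea mentioned above, here is a minimal sketch in Python (my own illustration, not the Vocabulary Profiler’s code) of how words that share prefixes can share storage and still be looked up quickly – the property that makes large wordlists small enough for a phone:

class TrieNode:
    def __init__(self):
        self.children = {}    # letter -> TrieNode
        self.is_word = False  # True if a word ends at this node

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def __contains__(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

# Illustrative use: check which words of a text fall inside a frequency band.
band = Trie(["work", "worker", "working", "house"])
print("worker" in band, "worked" in band)  # True False

Here “work”, “worker” and “working” share the path w-o-r-k, so the common prefix is stored only once.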

6. Are there any tools (not yours) that you yourself use for learning languages?

Over the years, I’ve tried and experimented with dozens of language learning solutions. Let me focus on three main areas:

Learning Management Systems (LMSs) – these are content delivery platforms, basically websites where teachers upload material for their classes and students do their homework, complete tests, review their progress and exchange messages with one another.

I gave Moodle a try, but it was just horrible to use for both teachers and students, and I think other people agreed with me, for it seems to be fading away into well-deserved oblivion.

Later, I tried Edmodo, which was a lot easier to use, and obviously inspired by Facebook, which was just starting to be the big thing at the time. I ran into numerous limitations using it, and finally, out of sheer frustration, just gave up. It was very pretty on the surface, but you couldn’t do much with it. And students preferred to use Facebook for their day-to-day communication, so it was difficult to make them use something else.

So today, I create Facebook groups for my students and use Google Drive, Forms and Docs to share documents and tests. It’s still not a perfect solution, but it has the advantage of being familiar to everyone and easy to use. Unlike the many solutions I’ve used before, I think these are versatile enough to do the job and are actively being developed and improved.

Flashcards – There are hundreds of apps and websites that help students learn through flashcards. I’ve tried many of them with my students, including Anki (which is a great piece of software). However, I’ve found that Quizlet is the easiest to set up and use. And there’s a huge library of flashcards made by talented teachers around the world available for anyone to use. It’s quite amazing, and it’s free.

Mobile Apps – I’ve also experimented with several dozen different learning tools for mobile phones. This is a very new market, as the iPhone only came out ten years ago. There is currently much hype around apps like DuoLingo, Babbel or Memrise, but personally I found them to be quite boring. The activities are very repetitive, and apart from situations where I would be forced to use them (on a crowded train with nothing else to do), I can’t imagine myself ever using them long-term.

This is still a very experimental field, which is why I find it shocking that the three biggest apps offer just two types of activities: multiple choice or fill-in-the-gap exercises. I would love to see more variety. There’s also the fact that, given their novelty, the effectiveness claims these apps advertise with are often greatly overstated – just see what happened to all the “brain training” apps like Lumosity, which now have to pay multi-million dollar fines for lying to their customers (https://arstechnica.com/science/2016/06/billion-dollar-brain-training-industry-a-sham-nothing-but-placebo-study-suggests/). There’s definitely room for improvement.

7. Any advice for people interested in learning to design such tools?

The most important thing is to have an idea of what to create: something that would be useful for you or your students that doesn’t yet exist, a faster and better way of doing something you do every day, or a radical improvement on a tool or solution you currently use.

Programming skills are secondary, and you can always find people who can help you out with technical stuff on StackOverflow. I’ve met a few programmers who, after completing their studies, had no idea what they wanted to create. Knowing what you’d like to create is the key.

It’s much easier to get into hobby development than it was 5 or 10 years ago. GitHub makes it super easy to upload your code and create a website for your project – all for free! It’s also a great way to discover other projects, make use of ready-made components and participate in the open source community by commenting or finding bugs.

JavaScript is one of the easiest programming languages you can learn, and it’s everywhere – on PCs, Macs, iPhones and Androids. With just one language, you can design for almost any device out there – the developments on the technological front are simply amazing.

On the teaching side, I can recommend nothing better than Scott Thornbury’s excellent article How could SLA research inform EdTech? (https://eltjam.com/how-could-sla-research-inform-edtech) which describes the needs of language learners and offers a list of requirements that should be met in order to create a truly excellent, cutting-edge language learning tool. To my knowledge, no such tool exists. Not by a long shot. It’s a great opportunity for creative minds.

8. Anything you want to add?

Thank you for noticing my work and giving me an opportunity to speak about it. Up until now I’ve been working on my projects almost in secret. It would be amazing if this interview inspired creative young minds to design new tools for language teaching, especially in languages other than English. I hope teachers will discover new tools that will help them teach better with less effort.

Technology has so much to offer in the field of learning languages, and there’s so much innovation to come. I’m looking forward to the bold new ideas of the future. Follow my work at vocab.today or on github!

Many thanks to Wiktor for spending time answering these questions. And here is the bonus link – Wiktor is compiling classic CALL programs that you can run in your browser. How awesome is that?! I am sure Wiktor would be glad to take suggestions of classic gems.

Finding relative frequencies of tenses in the spoken BNC2014 corpus

Ginseng English @ginsenglish issued a poll on Twitter asking:

This is a good exercise to try on the new Spoken BNC2014 corpus. See instructions to get access to the corpus.

You need to get your head around the part-of-speech (POS) tags. The BNC2014 uses the CLAWS 6 tagset. For the past tense we can use the past tense of lexical verbs and the past tense of DO. Using the past tenses of BE and HAVE would also pull in their uses as auxiliary verbs, which we don’t want. Figuring out how to filter out such uses could be a neat future exercise. Another time! On to this post.

Simple past:

[pos="VVD|VDD"]

pos = part of speech

VVD = past tense of lexical(main) verbs

VDD = past tense of DO

| = acts like an OR operator

So the above looks for parts of speech tagged as either the past tense of a lexical verb or the past tense of DO.

Simple present

The search term for the present simple is also relatively simple, to wit:

[pos="VVZ"]

VVZ = -s form of lexical verb (e.g. gives, works)

Note that the above captures only third person singular forms; how can we also catch first and second person forms?
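One option (untried here, so treat it as a starting point rather than a recommendation) would be to add the base form of lexical verbs to the search:

[pos="VV0|VVZ"]

VV0 = base form of lexical verb (e.g. give, work)

Bear in mind that VV0 also covers imperatives and other base-form uses, so this will over-count somewhat and the hits would need some manual weeding.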

Present perfect

[pos = "VH0|VHZ"] [pos = "R.*|MD|XX" & pos != "RL"]{0,4} [pos = "AT.*|APPGE"]? [pos = "JJ.*|N.*"]? [pos = "PPH1|PP.*S.*|PPY|NP.*|D.*|NN.*"]{0,2} [pos = "R.*|MD|XX"]{0,4} [pos = "V.*N"]

The search for the present perfect may seem daunting; don’t worry, the structure is fairly simple. The first search term [pos = "VH0|VHZ"] says look for the present tense forms of HAVE (have, has) and the last term [pos = "V.*N"] says look for past participles.

The other terms are looking for optional adverbs and noun phrases that may come in-between namely

“adverbs (e.g. quite, recently), negatives (not, n’t) or multiword adverbials (e.g. of course, in general); and noun phrases: pronouns or simple NPs consisting of optional premodifiers (such as determiners, adjectives) and nouns. These typically occur in the inverted word order of interrogative utterances (Has he arrived? Have the children eaten yet?)” – Hundt & Smith (2009).
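To make that concrete, take the quoted example “Have the children eaten yet?”. The tags below are my own approximation of CLAWS 6 output, so treat the mapping as illustrative rather than verified against the corpus:

Have (VH0) – matched by [pos = "VH0|VHZ"]

the (AT) – matched by the optional determiner slot [pos = "AT.*|APPGE"]?

children (NN2) – matched by the optional adjective/noun slot [pos = "JJ.*|N.*"]?

eaten (VVN) – matched by the final participle slot [pos = "V.*N"]

The adverb and pronoun slots simply match nothing here, and yet falls outside the matched sequence since it follows the participle.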

Present progressive

[pos = "VBD.*|VBM|VBR|VBZ"] [pos = "R.*|MD|XX" & pos != "RL"]{0,4} [pos = "AT.*|APPGE"]? [pos = "JJ.*|N.*"]? [pos = "PPH1|PP.*S.*|PPY|NP.*|D.*|NN.*"]{0,2} [pos = "R.*|MD|XX"]{0,4} [pos = "VVG"]

A similar structure to the present perfect search. The first term [pos = "VBD.*|VBM|VBR|VBZ"] looks for past and present forms of BE and the last term [pos = "VVG"] for the -ing participle of lexical verbs. The terms in between are for optional adverbs, negatives and noun phrases.

Note that all these searches are approximate – manual checking will be needed for more accuracy.

So can you predict the order of these forms? Let me know in the comments the results of using these search terms in frequency per million.
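By the way, if you are working from raw hit counts rather than the per-million figures the interface reports, the normalisation is simply raw frequency divided by corpus size, scaled up to a million words. A quick sketch in Python, with illustrative numbers only (check the corpus documentation for the exact token count):

def per_million(raw_hits, corpus_tokens):
    """Normalise a raw frequency to a per-million-words figure."""
    return raw_hits / corpus_tokens * 1_000_000

# Illustrative numbers only: 50,000 hits in a corpus of about 11.4 million tokens.
print(round(per_million(50_000, 11_400_000), 2))  # 4385.96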

Thanks for reading.

Other search terms in spoken BNC2014 corpus.

Update:

Ginseng English blogs about the frequencies of forms found in one study. Do note that as there are six inflectional categories in English – infinitive, first and second person present, third person singular present, progressive, past tense, and past participle – the opportunities to use a simple present form are greater, since two of the categories are present tense.

References:

Hundt, M., & Smith, N. (2009). The present perfect in British and American English: Has there been any change, recently? ICAME Journal, 33(1), 45-64. (pdf) Available from http://clu.uni.no/icame/ij33/ij33-45-64.pdf

Classified and Identified – A pedagogical grammar for article use

1990 was a good year for music  – Happy Mondays, Stone Roses, Primal Scream, James, House of Love. 1990 was also good for what is, in my humble opinion, one of the best pedagogical grammars for article instruction – Peter Master’s paper Teaching the English Articles as a Binary System published in TESOL Quarterly.

It is a pedagogical grammar because it simplifies the four main characteristics of articles – definiteness [+/-definite], specificity [+/-specific], countability [+/-count] and number [+/-singular] – into two bigger concepts, namely classification and identification. So zero (no) article and a/an are used to classify, and the is used to identify.

As discussed in a previous post the two main features of articles are definiteness and specificity. So the four possible combinations are:
1a. [-definite][+specific] A tick entered my ear.
b. [-definite][-specific] A tick carries disease.
c. [+definite][+specific] The computer is down today.
d. [+definite][-specific] The computer is changing our lives

Master’s binary scheme emphasizes 1b and 1c at the expense of 1a and 1d. That is, the +identification feature describes [+definite][+specific] and -identification, or classification, describes [-definite][-specific].

The effect of ignoring specificity in indefinite uses is to say that all uses of zero article or a/an are essentially generic. Whether we mean a specific, actual tick as in 1a or a generic one as in 1b, we still classify that tick when using the article a – paraphrased as ‘something that can be classified as a tick’ entered my ear/carries disease.

The effect of ignoring specificity in definite uses is to say that all uses of the are essentially specific. Although the difference between 1c and 1d is significant, we can rely on the fact that generic the is relatively infrequent. Further, some argue that generic the is not very different from specific the. The identified quality of a generic noun like the computer is held onto: we do not classify computer as one-of-a-group until we interpret the rest of the sentence, and when we understand the noun as requiring a generic interpretation we seem to see that interpretation through the individual. So generic the is considered as “the identification of a class”.

Master goes on to give some advice on teaching classification. For instance, have students sort a pile of objects into categories – These are books/These are pencils/This is paper/This is a pen.

For identification have students identify members in the categories – This is the blue book/These are the red pencils/This is the A4 paper/This is the new pen.

In addition, teach them that proper nouns, possessive determiners (my, her), possessive ’s (the girl’s), demonstratives (this, that) and some other determiners (e.g. either/neither, each, every) —> identify; while zero article, a/an, and determiners such as some/any, one —> classify.
Countability only needs to be considered for classified nouns, as identified nouns require the whether they are countable or not.

Master then provides the following chart:

After the concepts of classification and identification are presented and practiced, details of use can be shown as in the table below:

Master-2002
From Master, 2002

I won’t repeat what Master says as I have already done too much of that. Once you read Master’s paper the two figures can be used as a memory aid.

Master says that discourse effects of article use (e.g. given/theme and new/rheme) can be mapped onto his binary schema, i.e. given information is identification and new information is classification. And that for many noun phrase uses of articles – such as ranking adjectives, world-shared knowledge, descriptive vs partitive of-phrases, intentional vagueness, proper nouns and idiomatic phrases – there is no need to go beyond the sentence unless first/subsequent mention is involved.

Thanks for reading.

References:

Master, P. (1990). Teaching the English articles as a binary system. TESOL Quarterly, 24(3), 461-478.
Master, P. (2002). Information structure and English article pedagogy. System, 30(3), 331-348.

Successful Spoken English – interview with authors

The following is an email interview with the authors, Christian Jones, Shelley Byrne, Nicola Halenko, of the recent Routledge publication Successful Spoken English: Findings from Learner Corpora. Note that I have not yet read this (waiting for a review copy!).

Successful Spoken English

1. Can you explain the origins of the book?

We wanted to explore what successful learners do when they speak and in particular learners from B1-C1 levels, which are, we feel, the most common and important levels. The CEFR gives “can do” statements at each level but these are often quite vague and thus open to interpretation. We wanted to discover what successful learners do in terms of their linguistic, strategic, discourse and pragmatic competence and how this differs from level to level.  

We realised it would be impossible to use data from all the interactions a successful speaker might have so we used interactive speaking tests at each level. We wanted to encourage learners and teachers to look at what successful speakers do and use that, at least in part, as a model to aim for as in many cases the native speaker model is an unrealistic target.

2. What corpora were used?

The main corpus we used was the UCLan Speaking Test Corpus (USTC). This contained data only from students, from a range of nationalities, who had been successful (based on holistic test scoring) at each level, B1-C1. As points of comparison, we also recorded native speakers undertaking each test. We also made some comparisons to the LINDSEI (Louvain International Database of Spoken English Interlanguage) corpus and, to a lesser extent, the spoken section of the BYU-BNC corpus.

Test data does not really provide much evidence of pragmatic competence so we constructed a Speech Act Corpus of English (SPACE) using recordings of computer-animated production tasks by B2 level learners  for requests and apologies in a variety of contexts. These were also rated holistically and we used only those which were rated as appropriate or very appropriate in each scenario. Native speakers also recorded responses and these were used as a point of comparison. 

3. What were the most surprising findings?

In terms of the language learners used, it was a little surprising that as levels increased, learners did not always display a greater range of vocabulary. In fact, at all levels (and in the native speaker data) there was a heavy reliance on the top two thousand words. Instead, it is the flexibility with which learners can use these words which changes as the levels increase, so they begin to use them in more collocations and chunks and with different functions. There was also a tendency across levels to favour use of chunks which can be used for a variety of functions. For example, although we can presume that learners may have been taught phrases such as ‘in my opinion’, this was infrequent and instead they favoured ‘I think’, which can be used to give opinions, to hedge, to buy time etc.

In terms of discourse, the data showed that we really need to pay attention to what McCarthy has called ‘turn grammar’. A big difference as the levels increased was the increasing ability of learners to co-construct  conversations, developing ideas from and contributing to the turns of others. At B1 level, understandably, the focus was much more on the development of their own turns.

4. What findings would be most useful to language teachers?

Hopefully, in the lists of frequent words, keywords and chunks they have something which can inform their teaching at each of these levels. It would seem to be reasonable to use, as an example, the language of successful B2 level speakers to inform what we teach to B1 level speakers. Also, though tutors may present a variety of less frequent or ‘more difficult’ words and chunks to learners, successful speakers will ultimately employ lexis which is more common and more natural sounding in their speech, just as the native speakers in our data also did.

We hope the book will also give clearer guidance as to what the CEFR levels mean in terms of communicative competence and what learners can actually do at different levels. Finally, and related to the last point, we hope that teachers will see how successful speakers need to develop all aspects of communicative competence (linguistic, strategic, discourse and pragmatic competence) and that teaching should focus on each area rather than on only one or two of these areas.

There has been some criticism, notably by Stefan Th. Gries and collaborators, that much learner corpus research restricts itself to too few factors when explaining a linguistic phenomenon. Gries calls for a multi-factor approach, whose power can be seen in a study conducted with Sandra C. Deshors (2014) on the uses of may, can and pouvoir by native English users and French learners of English. Using nearly 4,000 examples from three corpora, annotated with over 20 morphosyntactic and semantic features, they found, for example, that French learners of English treat pouvoir as closer to can than to may.

The analysis for Successful Spoken English was described as follows:

“We examined the data with a mixture of quantitative and qualitative data analysis, using measures such as log-likelihood to check significance of frequency counts but then manual examination of concordance line to analyse the function of language.”
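For readers unfamiliar with log-likelihood as a significance measure for frequency counts, here is a minimal sketch of the usual two-corpus calculation in Python. It is my own illustration of the standard formulation, not the authors’ code, and the figures are invented purely for the example:

import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Two-corpus log-likelihood (G2) for one item's frequency counts."""
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    for observed, expected in ((freq_a, expected_a), (freq_b, expected_b)):
        if observed > 0:  # skip zero counts to avoid log(0)
            ll += observed * math.log(observed / expected)
    return 2 * ll

# Invented figures: an item occurring 300 times in 100,000 words of learner
# speech versus 150 times in 100,000 words of native-speaker speech.
print(round(log_likelihood(300, 100_000, 150, 100_000), 2))
# 50.97 - far above the 3.84 usually taken as significant at p < 0.05

The larger the value, the less likely the frequency difference between the two corpora is due to chance.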

Hopefully with the increasing use of multi-factor methods learner corpus analysis can yield even more interesting and useful results than current approaches allow.

Chris and his colleagues kindly answered some follow-up questions:

5. How did you measure/assign CEFR level for students?  

Students were often already in classes where they had been given a proficiency test and placed in a level. We then gave them our speaking test and only took data from students who had been given a global pass score of 3.5 or 4 (on a scale of 0-5). The borderline pass mark was 2.5, so we only chose students who had clearly passed but were not at the very top of the level, and obviously then only those who gave us permission to do so. The speaking tests we used were based on Canale’s (1984) oral proficiency interview design and consisted of a warm-up phase, a paired interactive discussion task and a topic-specific conversation based on the discussion task. Each lasted between 10 and 15 minutes.

6. So most of the analysis was in relation to successful students who were measured holistically?  

Yes.

7. And could you explain what holistically means here?

Yes, we looked at successful learners at each CEFR level, according to the test marking criteria. They were graded for grammar, vocabulary, pronunciation, discourse management and interactive ability based on criteria such as the following (grade 3-3.5) for discourse management: ‘Contributions are normally relevant, coherent and of an appropriate length’. These scores were then amalgamated into a global score. These scales are holistic in that they try to assess what learners can do in terms of these competences to gain an overall picture of their spoken English rather than ticking off a list of items they can or cannot use.

8. Do I understand correctly that comparisons with native speaker corpora were not as much used as with successful vs unsuccessful students? 

No, we did not look at unsuccessful students at all. We were trying to compare successful students at B1-C1 levels and to draw some comparison to native speakers. We also compared our data to the LINDSEI spoken learner corpus to check the use of key words.

9. For the native speaker comparisons what kind of things were compared?

We compared each aspect of communicative competence – linguistic, strategic, discourse and pragmatic competences to some degree. The native speakers took exactly the same tests so we compared (as one example), the most frequent words they used.

 

Thanks for reading.

 

References:

Deshors, S. C., & Gries, S. T. (2014). A case for the multifactorial assessment of learner language. Human Cognitive Processing (HCP), 179. Retrieved from https://www.researchgate.net/publication/300655572_A_case_for_the_multifactorial_assessment_of_learner_language

 

Overloading on cognitive load theory in SLA

This is a response to a John Sweller article in 2017 on applying cognitive load theory to language teaching.

Geary and the interface hypothesis

I want first to discuss cognitive developmental and evolutionary psychologist David Geary’s (2007) two types of knowledge, since Sweller invokes Geary to assert a critical division or discontinuity between child first language acquisition and adult second language acquisition.

Geary’s first type of knowledge (or abilities/domains/cognition – Geary uses these terms interchangeably) has evolved over human evolutionary time and is labelled primary knowledge. Such knowledge (such as your first language) is said to be fast and implicit. Geary’s second type of knowledge develops for cultural reasons and is slower and explicit. Geary uses reading as an example of the secondary type of knowledge. I have dropped the label ‘biological’ as I think it is unhelpful for the present discussion.

We can see a parallel here between Geary’s division and the conscious/unconscious or explicit/implicit division discussed in second language acquisition (SLA). The following quotes from Geary:

“I focus on primary abilities because these are the foundation for the construction of secondary abilities through formal education.” Geary, 2007:3
“Academic learning involves the modification of primary abilities…” Geary, 2007:5
“I assume that primary knowledge and abilities provide the foundation for academic learning.” Geary, 2007:6

seem to indicate, when applied to language, that there is some sort of interface between conscious learning of language and its unconscious acquisition.

So does such an interface exist? If so how does it work? Absent answers to such questions we should accept the default position that there is no interface, that explicit conscious language knowledge is separate from implicit unconscious knowledge (John Truscott, 2015).

Discontinuities and the nature of language

Cognitive scientists such as Susan Carey (2009) class language as a core cognitive activity (core cognition differs from sensory-perceptual systems and from theoretical conceptual knowledge), along with object, number, and agent cognition. And there is (largely) a continuity of such core cognitions from childhood to adulthood. Discontinuities happen with, say, object knowledge and physics knowledge – infants know that objects are solid, yet when older the theory of physics tells them that objects are not really solid. Here the physics is “incommensurate” with object cognition, and this contributes to the difficulty students have studying physics at school. Physics is at the same time more expressively powerful than object cognition.

It is unclear from Geary what kind of discontinuity is being described, or even whether there is one at all (though the labels primary and secondary seem to suggest there is). From what I can gather Geary seems to think that primary knowledge can help with secondary knowledge (seen as the interface position in SLA) and so the two may not be so conflictual after all. I may of course be mistaken in my reading of Geary here.

Geary’s lack of clarity about what kind of discontinuity he means may explain the logical leap that Sweller seems to have made, namely, that adult second language acquisition is secondary knowledge and incommensurate with the child’s first language acquisition. Let’s look at the passage where he indicates this:

“Learning a second language as an adult provides an example of secondary knowledge acquisition as do most of the topics covered in educational institutions. We invented education to deal with biologically secondary information. Learning to listen to and speak a second language as an adult requires conscious effort on the part of the learner and explicit instruction on the part of instructors. Little will be learned solely by immersion. Furthermore, since learning to read and write are biologically secondary because we have not evolved to acquire these skills, they also require conscious effort by learners and explicit teaching by instructors, irrespective of whether we are dealing with a native or second language.”

Sweller seems to be mixing up literacy skills with (adult) language acquisition. He further seems to switch between the two – compare “learning to listen to and speak a second language as an adult requires conscious effort” with “learning to read and write are biologically secondary”. He also assumes that because languages are taught in schools they are like other school subjects, i.e. that language is like developing conceptual knowledge in physics, maths, chemistry etc.

This assumption that language is like conceptual knowledge is very evident in this 1998 article by Graham Cooper and his use of a “foreign language” example to explain an aspect of cognitive load theory:

Graham Cooper “foreign language” and element interactivity

Most language teachers will find this view of language very peculiar. For example, there is the assumption that because a vocabulary item may be a single word it can be classed as low in element interactivity. This ignores the semantics of single words, for a start. More generally, as seen in the screenshot, there is an assumption that language is an object that can be transmitted to learners from the environment, much like concepts in a subject like maths.

Ignoring SLA

I want now to comment on some more paragraphs in the TESL Ontario article. Let’s start with the first paragraph:

“Most second language teaching recommendations place a considerable emphasis on “naturalistic” procedures such as immersion within a second language environment. Immersion means exposing learners to the second language in many of their daily activities, including other educational activities ostensibly unrelated to learning the second language.”

I guess that by “naturalistic” procedures Sweller may be alluding to Krashen’s Natural Approach? If so, he has badly misunderstood what that means and is badly out of date with the debate: badly misunderstood since the Natural Approach does not entail immersion, and badly out of date by ignoring developments such as task-based learning, which arguably “includes other educational activities ostensibly unrelated to the second language”.

“Information-store principle. In order to function, we must store immeasurably large amounts of information in long-term memory. The difference between people who are more as opposed to less competent in any area including competence in a second language is heavily determined by the amount of knowledge held in long-term memory (Ericsson & Charness, 1994; Nandagopal & Ericsson, 2012).”

This may, with caveats, apply to vocabulary learning or pragmatics, but how applicable is it to other language systems such as syntax or phonology? Further, the studies quoted are based on novices and experts in non-language domains like chess.

“In second language learning, this means teachers should explicitly present the grammar and vocabulary of the second language rather than expecting learners to induce the information themselves (see Kirschner et al., 2006, for alternative formulations that emphasise implicit learning) as occurs when dealing with a biologically primary task such as learning a native language as a child.”

Sweller is characterizing child acquisition as “expecting learners to induce the information”. What is meant by induction here? Does he mean usage-based notions of induction, where statistical information in the environment is used by the child to learn a language? If so, then usage-based folks say the same process also happens in adult language learning, and further that the process is not explicit in the sense used by Sweller.

“Requiring learners to go to a separate dictionary imposes an additional cognitive load. Learners should not be required to search for needed information.”

How does this claim compare with, say, the involvement load hypothesis of Batia Laufer and Jan Hulstijn from 2001, where “search” is one of the cognitive components and more “search” (e.g. consulting a dictionary) is said to lead to better vocabulary retention? (As an aside, the involvement load hypothesis was influenced by levels of processing theory. A general critique of cognitive load theory is: why should more load lead to learning problems? Contrast this with levels of processing, which implies that deeper (more load?) processing would lead to better performance.)

“Another recommendation is to avoid redundancy. Unnecessary information frequently is processed with learners only finding after the event that they did not need to process the additional information in order to learn.”

Considering the reported benefits for novice language learners of elaborated input (not translations but “redundancy and clearer signaling of thematic structure in the form of examples, paraphrases and repetition of original information, and synonyms and definitions of low-frequency words” – Sun-Young Oh, 2001), what evidence is there that such elaborated input is not as beneficial for more expert language learners?

To conclude, note that the summary report from the Centre for Education Statistics and Evaluation (2017), which ELT Research Bites covered, describes several criticisms of cognitive load theory in general. My discussion has attempted to critique the application of this theory to language acquisition. The critique is only cursory, but it is, I think, enough to raise serious doubts about the extent of Sweller’s awareness of SLA research, and hence reason to treat any applications very critically. This does not preclude future applications of cognitive load theory in language teaching, and certainly, notwithstanding the general critiques, it remains applicable in the domain of instructional design where it originated.

Thanks for reading.

References:

Carey, S. (2009). The origin of concepts. Oxford University Press.

Centre for Education Statistics and Evaluation. (August, 2017). Cognitive load theory: Research that teachers really need to understand. Downloaded from https://www.cese.nsw.gov.au/publications-filter/cognitive-load-theory-research-that-teachers-really-need-to-understand.

Cooper, G. (December 1998). Research into Cognitive Load Theory and Instructional Design at UNSW. Retrieved from http://dwb4.unl.edu/Diss/Cooper/UNSW.htm

Geary, D. C. (2007). Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology. In J. S. Carlson & J. R. Levin (Eds.), Educating the evolved mind: Conceptual foundations for an evolutionary educational psychology (pp. 1–99). Greenwich: Information Age. Retrieved from https://www.researchgate.net/publication/242124970_Conceptual_Foundations_for_an_Evolutionary_Educational_Psychology

Laufer, B., & Hulstijn, J. (2001). Incidental vocabulary acquisition in a second language: The construct of task-induced involvement. Applied Linguistics, 22(1), 1-26.

Oh, S. Y. (2001). Two types of input modification and EFL reading comprehension: Simplification versus elaboration. TESOL Quarterly, 35(1), 69-96.

Sweller, J. (August 2017). Cognitive load theory and teaching English as a second language to adult learners. Contact Magazine, 43(2), 5-9. Retrieved from http://contact.teslontario.org/cognitive-load-theory-esl/.

Truscott, J. (2015). Consciousness and second language learning. Multilingual Matters.

CORE blimey – genre language

A #corpusmooc participant, answering a discussion question on what they would like to use corpora for, replied that they wanted a reference book showing common structures in various genres such as “letters of condolence, public service announcements, obituaries”.

The CORE (Corpus of Online Registers of English) corpus at BYU, along with the virtual corpora feature, offers a way to get at this.

For example, the screenshot below shows the keywords of verbs & adjectives in the Reviews genre:

Before I briefly show how to make a virtual corpus, do note that the standard interface allows you to do a lot of things with the various registers. The CORE interface shows you examples of this. For example, the following shows the distribution of the present perfect across the genres:

Create virtual corpora

To create a virtual corpus first go to the CORE start page:

Then click on Texts/Virtual and get this screen:

Next press Create corpus to get this screen:

We want the Reviews Genre so choose it from the drop down box:

Then press Submit to get the following screen:

Here you can either accept these texts or, if say you want to build a film-review-only corpus, manually look through the links and filter for film reviews. Give your corpus a name or add it to an already existing corpus. Here we give it the name “review”:

Then after submitting you will be taken to the following screen, which shows your whole virtual corpora collection; we can see the corpus we just created at number 5:

Now you can list keywords.

Do note that the virtual corpora feature is available for most of the BYU corpora, so if genre is not your thing the other corpora on offer might be useful.

Thanks for reading and do let me know if anything appears unclear.