Successful Spoken English – interview with authors

The following is an email interview with the authors of the recent Routledge publication Successful Spoken English: Findings from Learner Corpora: Christian Jones, Shelley Byrne and Nicola Halenko. Note that I have not yet read the book (I am waiting for a review copy!).

Successful Spoken English

1. Can you explain the origins of the book?

We wanted to explore what successful learners do when they speak, in particular learners at B1-C1 levels, which are, we feel, the most common and important levels. The CEFR gives “can do” statements at each level but these are often quite vague and thus open to interpretation. We wanted to discover what successful learners do in terms of their linguistic, strategic, discourse and pragmatic competence and how this differs from level to level.

We realised it would be impossible to use data from all the interactions a successful speaker might have so we used interactive speaking tests at each level. We wanted to encourage learners and teachers to look at what successful speakers do and use that, at least in part, as a model to aim for as in many cases the native speaker model is an unrealistic target.

2. What corpora were used?

The main corpus we used was the UCLan Speaking Test Corpus (USTC). This contained data only from students, from a range of nationalities, who had been successful (based on holistic test scoring) at each level, B1-C1. As points of comparison, we also recorded native speakers undertaking each test. We also made some comparisons to the LINDSEI (Louvain International Database of Spoken English Interlanguage) corpus and, to a lesser extent, the spoken section of the BYU-BNC corpus.

Test data does not really provide much evidence of pragmatic competence, so we constructed a Speech Act Corpus of English (SPACE) using recordings of computer-animated production tasks by B2 level learners for requests and apologies in a variety of contexts. These were also rated holistically and we used only those which were rated as appropriate or very appropriate in each scenario. Native speakers also recorded responses and these were used as a point of comparison.

3. What were the most surprising findings?

In terms of the language learners used, it was a little surprising that as levels increased, learners did not always display a greater range of vocabulary. In fact, at all levels (and in the native speaker data) there was a heavy reliance on the top two thousand words. Instead, it is the flexibility with which learners can use these words which changes as the levels increase, so they begin to use them in more collocations and chunks and with different functions. There was also a tendency across levels to favour chunks which can be used for a variety of functions. For example, although we can presume that learners may have been taught phrases such as ‘in my opinion’, this was infrequent; instead they favoured ‘I think’, which can be used to give opinions, to hedge, to buy time etc.

In terms of discourse, the data showed that we really need to pay attention to what McCarthy has called ‘turn grammar’. A big difference as the levels increased was learners’ growing ability to co-construct conversations, developing ideas from and contributing to the turns of others. At B1 level, understandably, the focus was much more on the development of their own turns.

4. What findings would be most useful to language teachers?

Hopefully, in the lists of frequent words, keywords and chunks they have something which can inform their teaching at each of these levels. It would seem to be reasonable to use, as an example, the language of successful B2 level speakers to inform what we teach to B1 level speakers. Also, though tutors may present a variety of less frequent or ‘more difficult’ words and chunks to learners, successful speakers will ultimately employ lexis which is more common and more natural sounding in their speech, just as the native speakers in our data also did.

We hope the book will also give clearer guidance as to what the CEFR levels mean in terms of communicative competence and what learners can actually do at different levels. Finally, and related to the last point, we hope that teachers will see how successful speakers need to develop all aspects of communicative competence (linguistic, strategic, discourse and pragmatic competence) and that teaching should focus on each area rather than only one or two of these areas.

There has been some criticism, notably by Stefan Th. Gries and collaborators, that much learner corpus research restricts itself to too few factors when explaining a linguistic phenomenon. Gries calls for a multifactorial approach, whose power can be seen in a 2014 study conducted with Sandra C. Deshors on the uses of may, can and pouvoir by native English users and French learners of English. Using nearly 4,000 examples from three corpora, annotated with over 20 morphosyntactic and semantic features, they found, for example, that French learners of English treat pouvoir as closer to can than to may.

The analysis for Successful Spoken English was described as follows:

“We examined the data with a mixture of quantitative and qualitative data analysis, using measures such as log-likelihood to check the significance of frequency counts but then manual examination of concordance lines to analyse the function of language.”

Hopefully with the increasing use of multi-factor methods learner corpus analysis can yield even more interesting and useful results than current approaches allow.
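To illustrate the log-likelihood measure mentioned above, here is a minimal sketch following Rayson and Garside's widely used two-corpus formulation. The frequency figures in the example are hypothetical, not taken from the book's data:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood for comparing one word's frequency across two corpora
    (Rayson & Garside 2000). freq_* are raw counts, size_* are corpus sizes."""
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

# Hypothetical example: a chunk occurring 120 times in a 100,000-word
# learner corpus vs 90 times in a 200,000-word reference corpus.
print(round(log_likelihood(120, 100_000, 90, 200_000), 2))
```

A value above 3.84 corresponds to p < 0.05 at one degree of freedom, the conventional threshold in corpus frequency comparisons.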

Chris and his colleagues kindly answered some follow-up questions:

5. How did you measure/assign CEFR level for students?  

Students were often already in classes where they had been given a proficiency test and placed in a level. We then gave them our speaking test and only took data from students who had been given a global pass score of 3.5 or 4 (on a scale of 0-5). The borderline pass mark was 2.5, so we only chose students who had clearly passed but were not at the very top of the level, and obviously then only those who gave us permission to do so. The speaking tests we used were based on Canale’s (1984) oral proficiency interview design and consisted of a warm-up phase, a paired interactive discussion task and a topic-specific conversation based on the discussion task. Each lasted between 10 and 15 minutes.

6. So most of the analysis was in relation to successful students who were measured holistically?  

Yes.

7. And could you explain what holistically means here?

Yes, we looked at successful learners at each CEFR level, according to the test marking criteria. They were graded for grammar, vocabulary, pronunciation, discourse management and interactive ability based on criteria such as the following (grade 3-3.5) for discourse management: ‘Contributions are normally relevant, coherent and of an appropriate length’. These scores were then amalgamated into a global score. These scales are holistic in that they try to assess what learners can do in terms of these competences to gain an overall picture of their spoken English rather than ticking off a list of items they can or cannot use.

8. Do I understand correctly that comparisons with native speaker corpora were not used as much as comparisons of successful vs unsuccessful students?

No, we did not look at unsuccessful students at all. We were trying to compare successful students at B1-C1 levels and to draw some comparison to native speakers. We also compared our data to the LINDSEI spoken learner corpus to check the use of key words.

9. For the native speaker comparisons what kind of things were compared?

We compared each aspect of communicative competence – linguistic, strategic, discourse and pragmatic competences to some degree. The native speakers took exactly the same tests so we compared (as one example), the most frequent words they used.


Thanks for reading.


References:

Deshors, S. C., & Gries, S. T. (2014). A case for the multifactorial assessment of learner language. Human Cognitive Processing (HCP), 179. Retrieved from https://www.researchgate.net/publication/300655572_A_case_for_the_multifactorial_assessment_of_learner_language


Counting countability – EFCAMDAT

This is a short follow-up note to my previous post on using the EFCAMDAT learner corpus. I read an interesting paper on countability in World Englishes (HT Pascual Pérez-Paredes @perezparedes) and thought it would be interesting to look at the mass nouns that the study used in the French learner sub-corpus of EFCAMDAT that I had downloaded.

So, using AntConc, I counted the total uses of each mass noun and the number of (mis)uses of the mass noun as a count noun. The list below shows the percentage of these mass nouns used as countable nouns, i.e. in plural form or with a/an in front of the noun:

  1. Luggage 30%
  2. Information 28%
  3. Software 27%
  4. Evidence 26%
  5. Baggage 25%
  6. Advice 24%
  7. Homework 23%
  8. Knowledge 15%
  9. Research 12%
  10. Furniture 10%
  11. Violence 10%
  12. Feedback 9%
  13. Equipment 7%

The frequencies of these words in the whole French-nationality sub-corpus range from 343 hits per million (information) to 68 hits per million (equipment).
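The counting described above (plural form, or a/an before the noun) can be sketched programmatically. This is an illustrative approximation of the manual AntConc checks, not the actual workflow, and the sample sentence is made up:

```python
import re

def countable_uses(text, noun):
    """Count total uses of a mass noun and its 'countable' (mis)uses:
    plural -s form, or the noun preceded by a/an. A rough heuristic."""
    total = len(re.findall(rf"\b{noun}s?\b", text, re.IGNORECASE))
    plural = len(re.findall(rf"\b{noun}s\b", text, re.IGNORECASE))
    with_article = len(re.findall(rf"\ban?\s+{noun}\b", text, re.IGNORECASE))
    return total, plural + with_article

def per_million(freq, corpus_size):
    """Normalise a raw count to hits per million words."""
    return freq / corpus_size * 1_000_000

sample = "I need some informations. The information desk gave an information."
total, miscount = countable_uses(sample, "information")
print(total, miscount)  # 3 total uses, 2 countable (mis)uses
```

The per-million figure is simply the raw count divided by the sub-corpus size and multiplied by one million.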

So in the classroom I could test use of luggage (baggage), information, software, evidence, advice and homework.

NB1: The study on World Englishes is critical of the overemphasis of language teaching on such mass nouns and argues that, in terms of mutual intelligibility, there is not much difference between using them correctly or not. In this sample of French learners the percentages are quite high (compared to the study), so it seems worth spending time on them.

NB2: underwear had the highest percentage, 100%, but that is because its sole instance was a (mis)use, i.e. 1 hit of underwears. 🙂

Thanks for reading.

Getting learner data for vocabulary activities – EFCAMDAT

I was reading Philip Kerr’s great article on How to write vocabulary activities while thinking about creating an exercise on using the word important.

French students often make errors when using it to describe meanings related to size, e.g. ‘The Tour First is the most important office building in Paris’, where they meant to say the tallest/biggest office building.

One way to get ready-made examples of learner writing is to use a learner corpus. Unfortunately there is, to the best of my knowledge, only one currently free learner corpus. (Update: Diane Nicholls @lexicoloco has written about an Asian learner corpus; an open Polish learners of English corpus is available; a Taiwanese learners of English corpus; a Japanese EFL learner corpus.)

This post describes one way to make use of the EF-CAMbridge open language DATabase, EFCAMDAT. This is a corpus of written learner English containing over 30 million words.

You’ll need to register, which is free and painless. Other learner corpora peeps take note.

The Select scripts screen looks like this:

EFCAMDAT-selectscript

You can see that I have selected all the scripts (highlighted in blue) and that I have selected only French nationality (highlighted in red). You use the middle windows to make your selection by pressing the add button.

Next, you could query the system, but its search syntax takes a bit of getting used to. For example, the syntax for a simple query for the word important is [word="important"]. Furthermore, there is no easy way to see the wider context of a search result.

So it is better to just download the corpus by exporting it as shown in the next screen:

EFCAMDAT-exportdata

You can see some info about the scripts you have selected (highlighted in green) – approximately 1.46 million words, from 4138 learners, 1 nationality (French in this case), covering all the 16 levels.

The ‘unit of interest’ radio button should be set to selected scripts; the ‘information included’ box should be ticked as raw script text; and the ‘export format’ radio button should be XML compressed.

Once you have this XML file, you can open it up in your favorite concordancer – AntConc.

By sorting the wordlist as shown below we can see any spelling variations of important:

AntConc-wordlist_important-trimmed

So there is some interference from French spelling, though it is not a major problem here.
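A rough programmatic equivalent of that wordlist sort is to pull out all entries sharing a stem. This is purely illustrative; the word forms below are hypothetical, not taken from the corpus:

```python
def spelling_variants(wordlist, stem):
    """Return wordlist entries beginning with the given stem - a crude way
    to surface spelling variants such as French-influenced forms."""
    return sorted(w for w in wordlist if w.lower().startswith(stem.lower()))

# Hypothetical wordlist entries
words = ["important", "importante", "importants", "imported", "banana"]
print(spelling_variants(words, "important"))  # the three 'important' forms
```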

Next we can look through the concordance lines for important and pick out some sentences to use in an exercise. The following have been adapted so as to only focus on the use of the word important:

Like a majority of people I have two TVs and watch it more than I did five years ago, mainly because of the important choice of channels.

Flyfair Airlines is one of the most important airline companies in Asia and Creamium aims to increase its market in Asia

I will be a responsible student President and I believe that with your help we could achieve an important improvement in our study conditions.

I don’t have an important vocabulary in English and my accent is not very good

The winner is the person with the most important score.

Although the international sales were higher during the 3 first years , the national sales have been more important since 2006.

I work in a big library, the most important in Montpellier .

It’s by far the most important salary I ‘ve ever seen for this kind of job !

We are an important company in manufacturing based in Manchester.

The damage is very important , everything was destroyed.
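Picking out concordance lines like these can also be sketched outside AntConc with a minimal KWIC (keyword in context) function; the sample sentence is one of the learner examples above:

```python
import re

def concordance(text, word, width=40):
    """Simple KWIC: return a window of `width` characters of context on
    either side of each whole-word, case-insensitive hit of `word`."""
    hits = []
    for m in re.finditer(rf"\b{re.escape(word)}\b", text, re.IGNORECASE):
        start = max(0, m.start() - width)
        end = min(len(text), m.end() + width)
        hits.append(text[start:end].replace("\n", " "))
    return hits

sample = "The damage is very important, everything was destroyed."
for line in concordance(sample, "important", width=20):
    print(line)
```

In practice AntConc offers sorting and colour-coding of the context words, which a sketch like this does not.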

Depending on the level of your group, you could give them appropriate words to substitute. Further work could be done on the most frequent collocates, e.g. which word is more frequent:

The damage is very severe/extensive/large/ bad.

Thanks for reading.