Affix knowledge test and word part technique

CAT-WPLT

There is a new online test, the CAT-WPLT (a computerized adaptive testing version of the Word Part Levels Test), which assesses students' word part knowledge, i.e. prefixes, suffixes and stems (though the test only covers affixes, and only for receptive use). The (diagnostic) test is composed of three parts – form, meaning and use. The form part presents 1 real affix and 4 distractor affixes for the test taker to choose from. The meaning part presents 1 correct meaning and 3 distractor meanings, and the use part presents 4 parts of speech, one of which has to be matched correctly to the affix.
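For the technically curious, here is a minimal sketch of that three-part item structure. This is only my own illustration of the form/meaning/use layout described above – the names (AffixItem, form_options) and the example distractors are invented, and it says nothing about the actual CAT-WPLT item bank or its adaptive algorithm (for those, see Mizumoto, Sasao & Webb, 2017).

```python
from dataclasses import dataclass
from typing import List
import random

@dataclass
class AffixItem:
    # One hypothetical three-part item for an affix such as "re-".
    # This only sketches the form/meaning/use structure described above;
    # it is not the actual CAT-WPLT item bank or its adaptive logic.
    affix: str                      # the real affix, e.g. "re-"
    form_distractors: List[str]     # 4 distractor "affixes" for the form part
    meaning: str                    # the correct meaning, e.g. "again"
    meaning_distractors: List[str]  # 3 distractor meanings
    use: str                        # the part of speech the affix goes with
    use_options: List[str]          # 4 parts of speech offered in the use part

    def form_options(self) -> List[str]:
        """Return 1 real affix plus 4 distractors, shuffled, as in the form part."""
        options = [self.affix] + self.form_distractors
        random.shuffle(options)
        return options

item = AffixItem(
    affix="re-",
    form_distractors=["ze-", "glo-", "ri-", "ne-"],   # invented distractors
    meaning="again",
    meaning_distractors=["not", "before", "badly"],   # invented distractors
    use="verb",
    use_options=["noun", "verb", "adjective", "adverb"],
)
print(item.form_options())
```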

Try out the test – CAT-WPLT.

The online test takes about 10–15 minutes to complete and ends with a nice feedback screen showing how the test taker did on the form, meaning and use of the affixes. There are advanced, intermediate and beginner profiles for comparison.

Figure from Mizumoto, Sasao & Webb (2017), p. 14

So say you have a profile of a student who shows weakness in form and meaning. What now? Mizumoto, Sasao & Webb (2017) suggest giving learners their PDF list of 118 affixes (assuming you don't need to use the test again). So if your learner is at level 1 for recognizing the form of an affix, the affixes listed at level 2 can be focused on.

Another possibility is a memory technique called the word part technique.

Word part technique
Very simply, it means using an already known word which contains the same stem/root as the new word to be remembered.

More specifically, the system Wei and Nation (2013) describe lists very frequent stems, i.e. stems which appear in words among the most frequent 2,000 words of the BNC. These known stems are then used to learn words in the remaining 8,000 mid-frequency words of the BNC wordlist. For example, a high-frequency word like visit has the root -vis-, which appears in mid-frequency words such as visible, envisage and revise.

Once a form connection is seen between a known high-frequency word and a mid-frequency word, a meaning connection needs to be made, i.e. explaining the form connection. So to explain the word visible we can say that visible means something that you can see. Here the explanation uses the meaning of -vis-, i.e. see.

(high-freq. word)   visit    ->  go to see someone
                      |
(stem)              -vis-    ->  see
                      |
(mid-freq. word)    visible  ->  something that you can see

According to Wei and Nation (2013) the most difficult step is explaining the connection, though I think the most difficult is the first step – seeing the connection, i.e. spotting the stem/root. Wei and Nation (2013) encouragingly state that both making the connection and explaining it can develop with practice.
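If it helps to see the chain written out programmatically, here is a toy sketch of the technique. Only the visit / -vis- / visible example comes from Wei and Nation's description; the data layout and the explain() helper are my own invention.

```python
# A toy illustration of the word part technique: a known high-frequency word,
# the shared stem and its meaning, and mid-frequency words containing the stem.
word_parts = {
    "-vis-": {
        "meaning": "see",
        "known_word": ("visit", "go to see someone"),
        "new_words": {
            "visible": "something that you can see",
            "envisage": "see something in your mind",
            "revise": "look at something again",
        },
    },
}

def explain(stem: str, new_word: str) -> str:
    """Walk the chain: known word -> stem meaning -> new word explanation."""
    entry = word_parts[stem]
    known, known_gloss = entry["known_word"]
    return (f"You know '{known}' ({known_gloss}). "
            f"The stem '{stem}' means '{entry['meaning']}', "
            f"so '{new_word}' is {entry['new_words'][new_word]}.")

print(explain("-vis-", "visible"))
```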

 


Click here to see top 25 word stems taken from Wei & Nation (2013)


They go on to recommend that once students have worked with this technique together with the teacher, they can then use it by themselves as a strategy.

The technique's efficacy is on a par with the keyword technique and with learners' own methods or self-strategies (Wei, 2015). The word part technique has the added benefit of drawing on etymology and the history of words.

Thanks for reading.

References

Mizumoto, A., Sasao, Y., & Webb, S. A. (2017). Developing and evaluating a computerized adaptive testing version of the Word Part Levels Test. Language Testing. Advance online publication.

Wei, Z., & Nation, P. (2013). The word part technique: A very useful vocabulary teaching technique. Modern English Teacher, 22, 12–16.

Wei, Z. (2015). Does teaching mnemonics for vocabulary learning make a difference? Putting the keyword method and the word part technique to the test. Language Teaching Research, 19(1), 43-69.


Corpus linguistics community news 8

First up is the news that there are more than 700 members. Nice.

An important date for your diaries is 25 September 2017, when another round of #corpusmooc launches. This time new sections are promised, and the most notable new addition is a new version of LancsBox. Check out the following two cute vids being used to promote #corpusmooc 2017:

 

Also, if you use Twitter, you can follow the bot corpusmoocRT@corpusmoocFav.

Next up are some great plenary videos from this year's Corpus Linguistics 2017 knees-up in Birmingham, plus related notes from the conference by John Williams.

Checking the distribution of the pair on the one hand/on the other hand in BYU-COCA sections.

A graphic trying to depict keywords as calculated in AntConc.

A possible way to find collocations suitable for various proficiency levels.

And finally, for a bit o' fun: is this the longest term in ELT? And The Banbury Corpus Revisited by Michael Swan.

Thanks for reading, and for those coming off a summer break, much energy to you for the new teaching year.

 

ELTJam, machine learning, knowledge and skill

“Knowledge and skill are different. Vocabulary acquisition tools help learners improve their knowledge, which may in turn have a positive impact on skill, but it’s important to be cognisant of the differences.” [https://eltjam.com/machine-learning-summer-school-day-4/]

“We need to be careful though not to oversell the technology and be clear about what it can and can’t do. There is no silver bullet. This is especially the case when it comes to skills vs knowledge; a lot of the applications that could come from this sort of technology will help improve knowledge of English, and may contribute to accuracy” [https://eltjam.com/machine-learning-summer-school-day-5/]

The above two quotes are from a nice series of posts by ELTJam on a machine learning workshop. The first point, from the first quote, is indeed important to recognize. Bill VanPatten (2010) has argued that knowledge and skill are different. But what is meant by knowledge and what is meant by skill? For a nice summary of the VanPatten paper, see the video linked below.

Knowledge is mental representation, which in turn is the abstract, implicit and underlying linguistic system in a speaker's head. Abstract does not mean the rules in a pedagogical grammar; rather it refers to a collection of abstract properties which can result in rule-like behaviours. Implicit means that the content of mental representation is not accessible to the learner consciously or with awareness. Underlying refers to the view that a linguistic system underlies all surface forms of language.

The actual content of mental representation includes all the formal features of syntax, phonology, lexicon-morphology and semantics. A mental representation grows as input is acted on by systems in the learner's mind/brain.

Skill is the speed and accuracy with which people can perform certain behaviours. For language, skill refers to reading, listening, writing, speaking, conversational interaction and turn taking. To be sure, being skilled means that the person has a developed mental representation of the language. However, having a developed mental representation does not entail being skilled. How skill develops depends on the tasks that people are doing. A person learns to swim by swimming. A person learns to write essays by writing essays.

It follows that the Write&Improve (W&I) tool (as the flagship example of a machine-learning-based tool for language learning) can be seen as targeting how to be skilful in writing Cambridge English exam texts. The claim that machine learning, and by implication the feedback from W&I, changes the learner's knowledge of English does not accord with VanPatten's description of knowledge as mental representation. His description implies that explicit information – feedback, in the case of the writing tool – cannot lead to changes in the mental representation underlying writing. He also notes that research into writing is unclear as to whether feedback impacts writing development.

My point in this post is to briefly clarify the distinction between knowledge and skill (do read the VanPatten paper) and to suggest that the best that machine-learning-based tools can offer is opportunities for students to practise certain skills.

Postnote

W&I has never claimed that its tool has an impact on language knowledge. See Diane Nicholls' comment below.

References

VanPatten, B. (2010). The two faces of SLA: Mental representation and skill. International Journal of English Studies, 10(1), 1-18. PDF available [https://www.researchgate.net/publication/267793221_The_Two_Faces_of_SLA_Mental_Representation_and_Skill]

BlackBox Videocast 2: Mental Representation and Skill

What Chomsky said about “native speakers” in 1985

This is taken from a rambling but fascinating project by lexicographer Thomas M. Paikeday titled The Native Speaker is Dead!, published in 1985. He sent a 10-point memo to a number of linguists on the question of what a native speaker is. I thought it would be useful to put this up here, since notable ELT bods such as Scott Thornbury have used a recent native speaker debate to critique Chomsky (see Geoff Jordan's response). Whether Chomsky really answered the memo is up for grabs. Personally I think, like David Crystal, who also responded to the Paikeday memo, that Chomsky deftly sidesteps the import of the initial memo. The Paikeday book is available on the net but takes some searching; let me know and I can email it to anyone interested.

I have marked one passage in orange as it is not clear whether it was a response to a specific and separate question asked by Paikeday (on what Chomsky meant by "grammaticalness" in his book Aspects of the Theory of Syntax) or whether it was excerpted from the response Chomsky gave to the Paikeday memo. In Paikeday's book this passage comes first, but it seems oddly placed to me.

Chomsky:

I read your comments on the concept “native speaker” with interest. In my view, questions of this sort arise because they presuppose a somewhat misleading conception of the nature of language and of knowledge of language. Essentially, they begin with what seem to me incorrect metaphysical assumptions: in particular, the assumption that among the things in the world there are languages or dialects, and that individuals come to acquire them.

And then we ask, is an individual who has acquired the dialect D a native speaker of it or not, the question for which you request an “acid test” at the end of your letter.

In the real world, however, what we find is something rather different, though for the usual purposes of ordinary communication it is sufficient to work with a rather gross approximation to the facts, just as we refer freely to water, knowing, however, that the various things we call “water” have a wide range of variation including pollutants, etc.

To see what’s wrong with the question, let’s consider a similar one (which no one asks). Each human being has developed a visual system, and in fact visual systems differ from individual to individual depending on accidents of personal history and maybe even genetic differences. Suppose we go on (absurdly) to assume that among the things in the world, independently of people, there are visual systems, and particular individuals acquire one or the other of them (in analogy to the way we think of languages).

Then we could ask, who has a “native” visual system V, and what is the acid test for distinguishing such a person from someone who has in some more complex or roundabout way come to be “highly proficient” in the use of V (say, by surgery, or by training after having “natively” acquired a different visual system, etc.). Of course, all of this is nonsense.

But I think uncritical acceptance of the apparent ontological implications of ordinary talk about language leads to similar nonsense.

What we would say in the case of the visual system is this. There is a genetically determined human faculty V, with its specific properties, which we can refer to as “the organ of vision.” There may be differences among individuals in their genetic endowment, but for the sake of discussion, let’s put these aside and assume identity across the species, so we can now speak of the visual organ V with its fixed initial state V-0 common to humans, but different from monkeys, cats, insects, etc. In the course of early experience, V-0 undergoes changes and soon reaches a fairly steady state V-s which then remains essentially unchanged apart from minor modifications (putting aside pathology, injury, etc.). That’s the way biological systems behave, and to a very good first approximation, this description is adequate. The things in the real world are V-0 and the various states V-s attained by various individuals, or more broadly, the class of potential states V-s that could be attained in principle as experience varies.

We then see that the question about “native” acquisition is silly, as is the assumption that visual systems exist in some Platonic heaven and are acquired by humans.

Suppose now that we look at language in essentially the same way – as, I think, we should – extricating ourselves from much misleading historical and philosophical baggage. Each human has a faculty L, call it “the language faculty” or, if you like, “the language organ,” which is genetically-determined.

Again, we may assume to a very good first approximation that [the language faculty or language organ] is identical across the species (gross pathology aside), so that we can speak of the initial state L-0 of this organ, common to humans, and as far as is known, unique in the universe to the human species (in fact, with no known homologous systems in closely related or other species, in contrast now to V). In early childhood, the organ undergoes changes through experience and reaches a relatively stable steady state L-s, probably before puberty; afterwards, it normally undergoes only marginal changes, like adding vocabulary. There could be more radical modifications of a complex sort, as in late second language learning, but in fact the same is very likely true of the visual system and others.

Putting these complications aside, what is a “language” or “dialect”? Keeping to the real world, what we have is the various states L-s attained by various individuals, or more generally, the set of potential states L-s attained that could in principle be attained by various individuals as experience varies. Again, we see that the question of what are the “languages” or “dialects” attained, and what is the difference between “native” or “non-native” acquisition, is just pointless.

Languages and dialects don’t exist in a Platonic heaven any more than visual systems do. In both cases, there is a fixed genetic endowment that determines the initial state of some faculty or organ (putting aside possible genetic variation), and there are the various states attained by these systems in the course of maturation, triggered by external stimuli and to some rather limited extent shaped by them. In both cases, there is overwhelming reason to believe that the character of the steady state attained is largely determined by the genetic endowment, which provides a highly structured and organized system which does, however, have certain options that can be fixed by experience.

We could think of the initial state of the language faculty, for example, as being something like an intricately wired system with fixed and complex properties, but with some connections left open, to be fixed in one or another way on the basis of experience (e.g., do the heads of constructions precede their complements as in English, or follow them as in Japanese?). Experience completes the connections, yielding the steady state, though as in the case of vision, or the heart, or the liver, etc., various other complications can take place. So then what is a language and who is a native speaker? Answer, a language is a system L-s, it is the steady state attained by the language organ. And everyone is a native speaker of the particular L-s that that person has “grown” in his / her mind / brain. In the real world, that is all there is to say.

Now as in the case of water, etc., the scientific description is too precise to be useful for ordinary purposes, so we abstract from it and speak of “languages,” “dialects,” etc., when people are “close enough” in the steady states attained to be regarded as identical for practical purposes (in fact, our ordinary usage of the term “language” is much more abstract and complex, in fact hardly coherent, since it involves colors on maps, political systems, etc.). All of that is fine for ordinary usage. Troubles arise, however, when ordinary usage is uncritically understood as having ontological implications; the same problems would arise if we were to make the same moves in the case of visual systems, hearts, water, etc.

About the term “grammaticalness,” I purposely chose a neologism in the hope that it would be understood that the term was to be regarded as a technical term, with exactly the meaning that was given to it, and not assimilated to some term of ordinary discourse with a sense and connotations not to the point in this context.

Such questions as “how many languages are there” have no clear meaning; we could say that there is only one language, namely, L-0 with its various modifications, or that there are as many languages as there are states of mind/brain L-s, or potential states L-s. Or anything in between. These are questions of convenience for certain purposes, not factual questions, like the question of “how many (kinds of) human visual system are there?”

Apparent problems about the number of languages, native speakers, etc. arise when we make the kind of philosophical error that Wittgenstein and others warned against.

I think that looked at [my] way, the questions you raise no longer seem puzzling, and in fact dissolve.

References:

Paikeday, T. M. (1985). The native speaker is dead! An informal discussion of a linguistic myth with Noam Chomsky and other linguists, philosophers, psychologists, and lexicographers. Toronto and New York: Paikeday Publishing

A FLAIR, VIEW of a couple of interesting language learning apps

FLAIR (Form-focused Linguistically Aware Information Retrieval) is a neat search engine that retrieves web texts filtered through 87 grammar items (e.g. to-infinitives, simple prepositions, copular verbs, auxiliary verbs).

The screenshot below shows the results window after a search using the terms "grenfell fire".

There are 4 areas, which I have marked A, B, C and D, that attracted my attention the most. There are other features which I will leave for you to explore.

A – Here you can filter the results of your search by CEFR level. The numbers in faint grey show how many documents there are at each level, out of this particular search's total of 20.

B – Filter by the Academic Word List; the first icon to the right lets you add your own wordlist.

C – The main filter of 87 grammar items. Note that detection of some grammar items is more accurate than for others.

D – You can upload your own text for FLAIR to analyze.

Another feature to highlight is that you can use the "site:" command to search within specific websites, which is nice. A paper on FLAIR 1 suggests the following sites to try: https://www.gutenberg.org; http://www.timeforkids.com/news; http://www.bbc.co.uk/bitesize; https://newsela.com; http://onestopenglish.com.
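FLAIR's actual pipeline parses documents properly for all 87 constructions (see the Chinkina & Meurers paper in the references), but the underlying retrieve-then-rank idea can be sketched roughly as below. The regular expression is a deliberately crude stand-in for real grammatical annotation, and the document snippets are invented.

```python
import re

# Crude stand-in for one of FLAIR's 87 grammar items: a regex that roughly
# matches to-infinitives ("to" followed by a word). FLAIR itself relies on
# proper NLP parsing; this only illustrates the retrieve-then-rank idea.
TO_INFINITIVE = re.compile(r"\bto\s+[a-z]+", re.IGNORECASE)

# Invented snippets standing in for retrieved web documents.
documents = {
    "doc1": "Residents struggled to escape and firefighters fought to reach them.",
    "doc2": "The inquiry continues. Many questions remain unanswered.",
}

def construction_density(text: str) -> float:
    """Rough matches of the target pattern per 100 words."""
    words = len(text.split())
    hits = len(TO_INFINITIVE.findall(text))
    return 100 * hits / max(words, 1)

# Rank documents so those richest in the target construction come first.
for doc_id in sorted(documents, key=lambda d: construction_density(documents[d]), reverse=True):
    print(doc_id, round(construction_density(documents[doc_id]), 1))
```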

The following screenshot shows an article filtered by C1-C2 level, Academic Word List and Phrasal Verbs:

VIEW (Visual Input Enhancement of the Web) is a related tool that learners of English, German, Spanish and Russian can use to highlight web texts for articles, determiners, prepositions, gerunds, noun countability and phrasal verbs (the full set is currently available only for English). In addition, users can do activities, such as clicking, multiple choice and practice (i.e. fill in the blank), to identify grammar items. The developers call VIEW an intelligent automatic workbook.

VIEW comes as a browser add-on for Firefox, Chrome and Opera as well as a web app. The following screenshot shows the Firefox add-on menu:

VIEW draws on the idea of input enhancement as the research rationale behind its approach. 2
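To give a rough, hand-rolled sense of what input enhancement means in practice – VIEW itself does this in the browser with proper NLP analysis, many more grammar items and interactive activities – the toy snippet below simply wraps regex matches for a few prepositions in HTML mark tags.

```python
import re

# Toy input enhancement: wrap a handful of common prepositions in <mark> tags
# so they stand out visually in a web text. This is only an illustration of
# the general idea, not how VIEW actually identifies grammar items.
PREPOSITIONS = re.compile(r"\b(in|on|at|by|with|from|to|of)\b", re.IGNORECASE)

def enhance(text: str) -> str:
    """Return the text with target items wrapped for visual enhancement."""
    return PREPOSITIONS.sub(lambda m: f"<mark>{m.group(0)}</mark>", text)

print(enhance("The report was published by the council in June."))
```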

References:

1. Chinkina, M., & Meurers, D. (2016). Linguistically aware information retrieval: providing input enrichment for second language learners. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, San Diego, CA. PDF available [http://anthology.aclweb.org/W/W16/W16-0521.pdf]
2. Meurers, D., Ziai, R., Amaral, L., Boyd, A., Dimitrov, A., Metcalf, V., & Ott, N. (2010, June). Enhancing authentic web pages for language learners. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 10-18). Association for Computational Linguistics. PDF available [http://www.sfs.uni-tuebingen.de/~dm/papers/meurers-ziai-et-al-10.pdf]

Horses for courses #researchbites

Scott Thornbury weighed into a recent debate on the use of the construct native speaker in second language acquisition (SLA) with this:

“Hi Marek. A bit late in the day but… I suspect that Geoff insists on the NS-NNS distinction because it is absolutely central to the Chomskyan project (to which he is fervently – dare I say uncritically – committed) which presupposes an innately determined (hence genetic) language learning device which, like milk teeth, can only be available for a very limited period, whereafter general (i.e. non-language specific) learning abilities kick-in, accounting for the less than ‘native-like’ proficiency levels attained by late-starters. If, on the other hand, you take the perfectly plausible view (e.g. argued by Michael Tomasello, Nick Ellis, and many others) that general (i.e. non-language specific) learning capacities are implicated in language acquisition from the get-go, and hence that there is no need to hypothesise either a genetically-programmed language acquisition device nor a qualitative difference between native and non-native speakers, then the whole Chomskyan enterprise collapses, taking with it the distinction between man and beasts, and leading to the end of civilization as we know it.” [https://teflequityadvocates.com/2017/05/13/of-native-speakers-and-other-fantastic-beasts/comment-page-1/#comment-5049]

Here we see an assumption that theories in SLA necessarily have to conflict. This ELT Research Bites blog carnival entry describes a different position, taken by Jason Rothman and Bill VanPatten in On multiplicity and mutual exclusivity: The case for different SLA theories, published in 2013.

Why are there various theories about adult SLA?

Why so many, and why no convergence on one theory? An analogy to physics is made – at the macro level there is general relativity, whilst at the micro level there is quantum theory. Those theories further subdivide depending on the area of interest. More importantly, we cannot assume SLA is a unitary or singular thing. It is multifaceted, and so there are multiple theories which look at those many different aspects of SLA. This evokes the story of the wise blind scholars each describing a different part of an elephant.

So SLA can look at the internal issues of acquisition (e.g. input processing, output processing, internal representation, storage, retrieval) or it can look at external issues of acquisition such as interaction and its factors (e.g. context, social roles, identity, communicative intent).

How do various theories treat the S, the L and the A of SLA?

All theories can be said to assume that “second” means any language learned after acquisition of the first in childhood. Rothman and VanPatten go on to put various theories and frameworks into 4 groups:

1. Language is a mental construct – generative approach, connectionism, input processing, processability theory
2. Language is a socially mediated construct or originates from communication – systemic-functional approaches, socio-cultural theory
3. Language is a hybrid mental/social-communicative construct – spoken language grammar, socio-cultural theory
4. Language is not specified – interactionist framework, skill acquisition theory, dynamic systems theory

If we look within the particular groups we can subdivide further, e.g. in group 1 there is a division between those that see language as domain-specific and modular (generative approach, input processing) and those that do not (connectionism). In group 4 there may be no clear view on the precise nature of language, but the theories are clear on what it is not. Dynamic systems theory, for example, rejects the generative view that language is modular and has innate components.

Each theory's view of language affects how it thinks language is acquired and what causes change during acquisition – e.g. a generative view sees acquisition as driven mostly by universal constraints via learner-internal, language-specific mechanisms, whereas connectionism sees acquisition as sourced exclusively from external stimuli in coordination with general, non-language-specific mechanisms.

How does environmental context influence theories?

Theories that see language as primarily a mental construct are interested in how language becomes represented. So the generative approach, connectionism, input processing and processability theory see external contexts as independent of their concerns. By contrast, theories such as skill acquisition theory and sociocultural theory focus on factors other than grammatical representation and processing; they look at the roles of practice, negotiation, interaction, attitude, participant relationships, aptitude, motivation, etc.
Consequently, theories that have direct classroom implications or are based on classroom contexts will be popular with teachers, whereas those with no classroom basis will be seen as more abstract and less useful for teachers.

To what extent are theories in competition?

Coming back to Scott's implication that either Chomsky is right and connectionism is wrong or vice versa, Rothman and VanPatten argue that theories can be seen as more complementary than generally thought. For example, acquiring vocabulary and surface forms can arguably best be described using connectionism, whilst a generative approach can best describe syntactic acquisition.

In skill acquisition theory it is assumed that domain-general mechanisms are at play, but this holds only if we don't make a distinction between learning and acquisition. If we do make the distinction, then what skill acquisitionists are describing is learning – a process where metalinguistic knowledge, independent of competence, forms a separate system for performance – whilst generative approaches are concerned with acquisition, where syntactic knowledge is processed and represented.

Rothman and VanPatten admit that skill acquisitionist bods may well disagree with the description presented, but the simple point is that such a description is possible.

So?

Returning to the debate on how SLA conceptualises native speakers, we can say that theories concerned with mental representation of language use the construct of the native speaker at a broader, more abstract level for their purposes. Meanwhile, socio-cultural theorists are concerned with contextual and environmental questions, and at these more granular levels the native speaker construct is problematic and may need to be discarded.

References:

Rothman, J., & VanPatten, B. (2013). On multiplicity and mutual exclusivity: The case for different SLA theories. In M. P. García-Mayo, M. J. Gutiérrez-Mangado, & M. Martínez Adrián (Eds.), Contemporary approaches to second language acquisition (pp. 243–256). Amsterdam: John Benjamins. PDF available [https://www.researchgate.net/publication/263804781_On_Multiplicity_and_Mutual_Exclusivity_The_Case_for_Different_SLA_Theories]

 

#IATEFL 2017 – ELF, Juncker and the striptease dance of culture

Apologies in advance for bandwagoning a news item and IATEFL 2017; I hope my attempt is worth a read.

The switch from talking about language to talking about culture is an easy one to make. So it is not surprising that defenders of native speakerism invoke culture as a reason people want to learn a language from a native speaker: English culture from a native speaker is a proxy for ownership. As Martin Kayman notes 1, there is a long tradition of talking about language in terms of property. He cites the definition of English in Dr. Johnson's famous dictionary: "E‘NGLISH. adj. [engles, Saxon]. Belonging to England". More recently Henry Widdowson asserted "[English] is not a possession which [native speakers] lease out to others, while still retaining the freehold. Other people actually own it." And Jacques Derrida stated "I only have one language; it is not mine".

Claims that to learn a language one needs to know its culture are heavily imbued with ideas of ownership.

When Marek Kiczkowiak states 2 that "English is a global language. It's the official language of over 50 countries.", he is trying to highlight the release of English from ownership by its more distant British and more recent US history. Kayman points out that the first modern efforts at dis-embedding English from the old narrative can be seen in the simultaneous development of Communicative Language Teaching and the modern technologies of global communication such as the internet, email, etc. That is, English became the preferred language for communication through its association with the evolving technologies, while at the same time language pedagogy was promoting communicative functions rather than linguistic structure or cultural content. So in this way culture, recast as communicative function, became available to all.

An audience member at the IATEFL ELT Journal debate on ELF hints 3 at this Kayman origin story (though her immediate point is about dilution of the term ELF):
“I don’t know about linguistic imperialism but it seems to me that ELF is becoming as pervasive and invasive in its claims to relevance and just as unclear to me as the term communicative once was. I can remember hearing things here about accommodation, about communication strategies. And also wondering a little bit how some interpretations of ELF are any different from interpretations and pedagogic implications of dealing with interlanguage once was.”

Yet Kayman argues that this subordination of culture to the functional properties of communication did not really release culture from its English inheritance. The spread of communicative language teaching was mainly due to British, Australian and American academics, new materials from Anglo publishing houses, new methods promoted through the British Council, etc. Kayman points to the work of Robert Phillipson, which argued that the adoption of English as a global language is fundamentally incompatible with an emancipatory project; the alternative approach is multilingualism. By contrast, ELF promotes English.

ELF moves the subject from the native speaker to the non-native speaker and hence can be said to complete the project started by communicative language teaching in the 70s and 80s. This shifting of the subject of English runs in parallel with the shifting of the site of English away from the home nations of the language, as Marek points out: "It's the official language of over 50 countries".

This means that ELF and globalization are intimately entwined, and hence English is privileged in the project of globalization. Further, with the use of the term lingua franca in preference to international language, world English, world Englishes, global English, etc., Kayman sees a return to the vexed issue of ownership.

Marek asks "So what does culture even mean in relation to the English language?". The defenders of native speakerism claim that English is still owned by the home countries, whilst advocates of non-native speakerism claim English is a language for which notions of culture are devoid of meaning.

A Forbes magazine writer, responding to a recent tongue-in-cheek claim (about the slow loss of English in Europe) by polyglot European Commission President Jean-Claude Juncker, recalls 4:
“And as someone with some decades of working and living in non-English speaking lands I really should point out that English becomes more important the fewer English there are about…However, the thing about is (sic) is that it is relentlessly stripped of anything which is not a shared cultural idea.”

So English can be "stripped" of its cultural baggage and used instrumentally by those who wish to do so. Yet can language so easily escape its cultural history? New meanings are not created out of nothing; hybrid forms are possible because language carries potential meanings that are dependent on culture and that are enacted in, and traceable to, specific contexts. Kayman claims that Jennifer Jenkins' view of ELF as a bastard child can only be so in an "English" way. He states that English can only be free from cultural locations to the extent illustrated by John Locke: "in the beginning all the World was America".

The new American world was an empty land, land owned by no one. Jenkins' view is a postmodern inversion of Locke's imperialistic concept of America. Locke and Jenkins, though having opposing aims (colonial justification and a tool for emancipation respectively), have in common that both are u-topian – positing spaces where things exist without already being the property of anyone in particular. Yet the drive of globalization and the commodification of everything extend to ELF, English as a global language, world Englishes, etc. These commodities are branded with the emancipatory vision of globalization, seen in British Council slogans such as "making a world of difference".

And so the cultural political dance of English… the culture dance of English… the dance of English continues to be performed by many different players, in many different settings.

Notes:

1. Kayman, M. A. (2009). The lingua franca of globalization:“filius nullius in terra nullius”, as we say in English. Nordic Journal of English Studies, 8(3), 87-115. (pdf) – [http://ojs.ub.gu.se/ojs/index.php/njes/article/download/361/354]

2. Native speakers know the culture? – BBELT 2017 plenary part 2

3. IATEFL 2017 ELT Journal Debate

4. Jean Claude Juncker Insists English Is Losing Importance In Europe – In English To Be Understood [https://www.forbes.com/sites/timworstall/2017/05/05/jean-claude-juncker-insists-english-is-losing-importance-in-europe-in-english-to-be-understood/#2ba975ae57f2]