Language learning technology can be so much more than what commercial groups are offering right now. The place to look is to independent developers and teachers who are innovating in this area. Adam Leskis is one such person and here he discusses his views and projects.
1. Can you tell us a little about your background?
I started out in my first career as an English teacher, and it was clear to me that there were better ways we could both create and distribute digital materials for our students. As an example, during my last year of professional teaching (2015), the state of cutting-edge tech integration was taking a PowerPoint from class and uploading it to YouTube.
What struck me in particular was the way technology was being used primarily to reproduce traditional classroom methods of input rather than to take advantage of the capabilities of the digital medium. I saw paper handouts being replaced by uploaded PDFs, classroom discussions replaced by online forums, and teacher-fronted lectures replaced by videos of teachers speaking.
I knew I wanted to at least try to do something about it, so I set off teaching myself how to use the tools to create things on the internet. I eventually got good enough to be hired to do web development full time, and that’s what I’ve been doing ever since.
2. In what ways do you feel technology can help with learning languages?
Obviously, given the very social nature of education and human language use, technology could never fully replace a teacher, and so this isn’t really what I’m setting out to do. Where I see technology being able to make an enormous impact, though, is in its ability to automate and scale a lot of the things on the periphery that language learning involves.
As an example, vocabulary is a very important component to being able to use and understand language. Thankfully, we now have the insights from corpus-based methods to help us identify which vocabulary items deserve primary focus, and it’s a fairly straightforward task to create materials including these.
However, what this means in practice is that students either need to pay for expensive course books created with a corpus-informed approach to vocabulary, or teachers and students need to spend time creating these materials themselves. Course books tend to be very expensive, and even those that come with online materials aren't updated very frequently. Teachers and students creating their own materials are left to scour the internet for texts, analyze and filter them for appropriate vocabulary, and then construct materials that target both the particular skill areas they want to practice (e.g. writing, listening) and the authentic contexts they are interested in. It's a very time-consuming manual process.
Technology can address both of these concerns (infrequent updates and the time required). As one example, I created a very simple web app that pulls in content from the writing prompts sub-reddit (https://www.reddit.com/r/WritingPrompts/) and uses it to help students work on identifying appropriate articles (a/an/the) to accompany nouns and noun phrases. The content is accessed in real time when the student is using the application, and given the fast turnover in this particular sub-reddit, using it once a day would incorporate completely different content, essentially forming a completely new set of activities each time.
One of the other advantages of this approach is the automated feedback available to the user. So in essence, it's a completely automated system that uses authentic materials (created largely by native speakers for native-speaker consumption) to instantly generate and assess activities focused on one specific learning objective.
The approach does still have its shortcomings, in that this particular system is just finding all the articles and replacing them with a selection drop-down, so it’s only able to give feedback on whether the user’s selection is the same as the original article. Also, since this is a very informal genre, the language used might not be suitable for all ages of users.
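The core mechanic is simple enough to sketch in a few lines. This is a hypothetical re-implementation, not the actual app's code: it finds the articles in a source text, records them as the answer key, and then checks a user's choices against the originals, which is exactly why feedback is limited to "same as the original or not."

```python
import re

def make_activity(text):
    """Replace each article with a numbered blank, keeping the
    originals as the answer key."""
    key = []

    def blank(match):
        key.append(match.group(0).lower())
        return "[{}]".format(len(key))

    cloze = re.sub(r"\b(a|an|the)\b", blank, text, flags=re.IGNORECASE)
    return cloze, key

def give_feedback(answers, key):
    """Score each user choice against the original article."""
    return [given == expected for given, expected in zip(answers, key)]

cloze, key = make_activity("A dragon guarded the entrance to an ancient cave.")
# cloze == "[1] dragon guarded [2] entrance to [3] ancient cave."
# key   == ["a", "the", "an"]
print(give_feedback(["a", "the", "a"], key))  # [True, True, False]
```

In the real app the blanks become selection drop-downs, but the data flow is the same: the original text supplies both the activity and its answer key.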
3. What are your current projects?
I wish I had more time to work on these, since I currently only have early mornings and commuting time on the train to use for side projects, but there are a few things I'm working on that I'm really excited about.
Now that I have one simple grid-based game up and running (https://www.grammarbuffet.org/rhyme-game/), I'm thinking about how I can re-use that same user interface to target other skills. If, instead of needing to tap on the words that rhyme, we could just have the users say them, that would be a much more authentic way to assess whether the user is able to "do something" with their knowledge of rhymes. There is a Web Speech API in HTML5 that I've been meaning to play around with, so that could be a potential way to create an alternate version based on actual speaking skills rather than just reading skills.
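The game logic behind a round like this can be sketched very simply. The heuristic below is an assumption for illustration only: it compares word endings, whereas a serious implementation would compare phonemes (e.g. from the CMU Pronouncing Dictionary), since English spelling and sound often diverge ("though" vs "cow").

```python
def crude_rhyme(word1, word2, tail=3):
    """Very naive rhyme check: compare the last few letters.
    Purely a placeholder for a phoneme-based comparison."""
    w1, w2 = word1.lower(), word2.lower()
    if w1 == w2:
        return False  # the target word itself isn't a valid answer
    return w1[-tail:] == w2[-tail:]

# A grid round: tap every word that rhymes with the target.
target = "light"
grid = ["night", "fright", "ocean", "sight", "table"]
print([w for w in grid if crude_rhyme(target, w)])  # ['night', 'fright', 'sight']
```

Swapping the tap handler for a speech-recognition result would leave this scoring logic untouched, which is what makes re-using the grid interface for other skills attractive.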
Another permutation of the grid-based game template would be integrating word stress instead of rhymes. I'm currently trying to get a good dataset containing word stress information for all the words in the Academic Word List (Coxhead, 2000), which I suppose is a bit dated now as a corpus-based vocabulary list, but it was my first introduction to the power of a corpus approach, and so I've always wanted to use it to generate materials on the web. The first version of this will probably also just involve seeing the word and using stress knowledge to tap it, rather than speaking, but I'm also imagining how we could use the capabilities of mobile devices to allow the user to shake or just move their phone up and down to give their answers on word stress. Once that's up and running it's very simple to incorporate more modern corpus-based vocabulary lists (e.g. the Academic Spoken Word List, 2017). Moreover, since this is all open source, anyone could adapt it for their particular vocabulary needs and deploy a custom web app via tech like Netlify.
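One plausible source for that stress dataset is the CMU Pronouncing Dictionary, where digits on the vowel phonemes mark stress (1 = primary, 2 = secondary, 0 = unstressed). The sketch below shows how stress patterns could be derived from entries in that format; the three sample entries are copied in by hand just for the example, not loaded from the real dictionary.

```python
# CMU Pronouncing Dictionary-style entries (hand-copied sample).
PRONUNCIATIONS = {
    "analyze":  ["AE1", "N", "AH0", "L", "AY2", "Z"],
    "concept":  ["K", "AA1", "N", "S", "EH0", "P", "T"],
    "research": ["R", "IY0", "S", "ER1", "CH"],
}

def stress_pattern(word):
    """Return the stress digit for each syllable, e.g. '102' for 'analyze'."""
    phones = PRONUNCIATIONS[word]
    return "".join(p[-1] for p in phones if p[-1].isdigit())

def primary_stress_syllable(word):
    """1-based index of the syllable carrying primary stress."""
    return stress_pattern(word).index("1") + 1

print(stress_pattern("analyze"))            # '102'
print(primary_stress_syllable("research"))  # 2
```

A game round could then ask the user to tap (or eventually shake the phone on) the stressed syllable and score the answer against `primary_stress_syllable`.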
Beyond these simple games, I'm also starting to work on a way to take authentic texts (possibly from a more academic genre on reddit like /r/science, or the text of articles on arXiv) to create cloze-test materials using the AWL. The user would need to supply the words rather than select them, which is a much more authentic assessment of their ability to understand and actually use these words in written English.
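Generating such a cloze test is mostly a matter of blanking out the target vocabulary. A minimal sketch, using a tiny placeholder subset of the AWL (the real list has 570 word families):

```python
import re

# Placeholder subset of the Academic Word List, for illustration only.
AWL_SAMPLE = {"analyze", "concept", "data", "research", "theory"}

def make_cloze(text, targets=AWL_SAMPLE):
    """Blank out target words; the user must *supply* each word,
    not pick it from a list."""
    key = []

    def blank(match):
        word = match.group(0)
        if word.lower() in targets:
            key.append(word.lower())
            return "_____"
        return word

    cloze = re.sub(r"[A-Za-z]+", blank, text)
    return cloze, key

cloze, key = make_cloze("The research relies on survey data and a clear theory.")
print(cloze)  # The _____ relies on survey _____ and a clear _____.
print(key)    # ['research', 'data', 'theory']
```

Scoring supplied answers against `key` (case-insensitively, perhaps with some tolerance for inflected forms) is then straightforward, and the same function works on any authentic text the backend provides.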
4. I really like the idea of offline access, how can people interested in this learn more?
It's a very relevant concept for users who have either very unreliable network access or relatively expensive network costs. If we're discussing applications that users engage with every single day, network access becomes non-trivial, especially if the app uses the old website model of a full page reload on every change in the view rather than being a modern single-page app written in a framework like Angular or React. So absolutely, I would say it matters whether modern learning materials are using the latest technology to enable all of these enhancements to traditional webpages.
Much of this movement towards "offline-first" is informed by the JAMstack, which is itself a movement towards static sites that are deployable without any significant backend resources. This speaks to one of the goals of the micromaterials movement, which is the separation of getting the data from actually doing something with it in the web application. One early attempt at setting up a backend API to be consumed is https://micromaterials.org, which just returns sentences from the WritingPrompts subreddit. It's admittedly very crude (and even written in Python 2, yuck!), but it shows what could eventually be a model for data services that feed into front-end micromaterials apps.
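That separation can be sketched in a few lines. The response shape below is an assumption, modeled loosely on what a sentence service like micromaterials.org might return; the point is that the data service knows nothing about activities, so any number of front-end apps can transform the same JSON however they like.

```python
import json

# -- "Backend": a data service that only returns raw sentences. --
def sentences_endpoint():
    payload = {"sentences": [
        "A dragon guarded the entrance.",
        "The storm arrived without warning.",
    ]}
    return json.dumps(payload)

# -- "Frontend": one of many possible consumers of that JSON. --
def scramble_activity(response_json):
    """Turn each sentence into a word-ordering exercise by
    presenting its words in sorted order."""
    data = json.loads(response_json)
    return [sorted(s.rstrip(".").split()) for s in data["sentences"]]

print(scramble_activity(sentences_endpoint())[0])
# ['A', 'dragon', 'entrance', 'guarded', 'the']
```

A different app could consume the identical endpoint to build article-selection drop-downs or cloze tests; the backend never needs to change.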
5. Ideas/Plans for the future?
The shortcomings I've described are a lot more obvious while this remains one of only a few such applications, but imagine if there were hundreds or even thousands of these forming something much more like an ecosystem. Then extrapolate further to imagine thousands of backend server-side APIs for every conceivable genre of English, enabling a multitude of frontend applications to consume the data and create materials for different learners. As soon as you have one server-side service providing data on AWL words, any number of web applications can consume and transform that data into activities.
The plan all along was not for me to create all of these applications, but to inspire others to begin creating similar types of micromaterials. It hasn't yet caught on, and clearly, expecting teachers to take up this kind of development is not sustainable. I'm hoping that other developers see the value in these and join the movement.
In a sense, the server-side APIs are the bigger prerequisite to getting this whole thing off the ground, so I'm very happy to work with any backend developers on what we need going forward, but I'm also going to continue developing things myself until we have a big enough community to take over.
I think whether all of these micromaterials exist under the umbrella of one single sign-on with tracking and auditing is beyond the scope of where we’re currently at, though I’m imagining a world where users could initiate their journey into the service, take a simple test involving all four of the main skills (reading, writing, speaking, and listening), and then be recommended a slew of micromaterials to help them out.
For some users that might focus more on the reading and writing components, whereas for others it might focus more on the speaking and listening ones. The barrier to making this available is not technical at all; it just requires development time invested in creating the materials. If I had them all created right now, I could deploy them today with modern tooling like Netlify.
The problem is more one of availability and time, and I’m more than happy to work with other developers and teachers to bring this closer to a reality for our students.
Please do read the other posts in the Grassroots language technology series if you have not done so already.