This post describes how to set up a workflow using two programs to build a corpus of text from the internet: TextSTAT and AntConc. TextSTAT is used for its webcrawler to build your corpus [update 1: an alternative program is ICEweb; update 2: BootCaT custom URL] and AntConc is used to analyse it. Note that I won’t be detailing any analysis in this post; that deserves a series of posts of its own.
The first program, TextSTAT, is used to bring in your text from the internet. When you first run TextSTAT you will see the screen below:
First create a new corpus by going to Corpus/New corpus.
Next go to Corpus/Add file from web or press this icon:
to get the following dialogue box:
The above screenshot shows that I have entered the directory for posts from January 2011 on the webmonkey.com site, i.e.
http://www.webmonkey.com/2011/01/. You need to have a good idea of how your site is structured in terms of how it archives its text.
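If your site uses this kind of month-based archive layout, you can sketch out the URLs you will be feeding TextSTAT in advance. A minimal illustration (the base URL and the month range here are just examples):

```python
# Sketch: build monthly archive URLs for a site that files posts
# under /YYYY/MM/, as webmonkey.com does.
BASE = "http://www.webmonkey.com"

def archive_urls(year, months):
    """Return one archive URL per month, zero-padded (e.g. /2011/01/)."""
    return [f"{BASE}/{year}/{m:02d}/" for m in months]

print(archive_urls(2011, range(1, 4)))
# → ['http://www.webmonkey.com/2011/01/', 'http://www.webmonkey.com/2011/02/', 'http://www.webmonkey.com/2011/03/']
```

Checking the generated URLs in a browser first saves you from pointing the crawler at a pattern the site does not actually use.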
I then select 25 as the number of pages to retrieve (you need to be aware of how many posts your particular website averages per month). I finally set domain to "only subdirectory". Once you press search you should get a window like the one below:
Note that links such as
http://www.webmonkey.com/2011/01/ and http://www.webmonkey.com/2011/01/page/2/ etc. are indexes to the texts, and so can be deleted by first selecting them:
And then right clicking:
and finally selecting remove files, leaving you with the following screen:
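If you end up doing this clean-up often, the same filtering can be sketched in code. The pattern below assumes the index pages are the bare month URL plus its /page/N/ continuations, which is how webmonkey.com pages its archives; other sites will need a different pattern:

```python
import re

# Sketch: separate article pages from archive-index pages. The regex
# matches the bare month URL and its /page/N/ continuations only.
INDEX_RE = re.compile(r"^http://www\.webmonkey\.com/2011/01/(page/\d+/)?$")

def drop_indexes(urls):
    """Keep only URLs that are not archive indexes."""
    return [u for u in urls if not INDEX_RE.match(u)]

urls = [
    "http://www.webmonkey.com/2011/01/",
    "http://www.webmonkey.com/2011/01/page/2/",
    "http://www.webmonkey.com/2011/01/some-article/",  # hypothetical post URL
]
print(drop_indexes(urls))
# → ['http://www.webmonkey.com/2011/01/some-article/']
```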
Next switch to the Word Forms tab:
Press the Frequency List button and you will get a screen like this:
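To get a feel for what a frequency list is doing under the hood, here is a minimal sketch of the same idea in Python. The tokenisation here (lowercase, letters only) is an assumption for illustration; TextSTAT's own tokenisation rules may differ:

```python
import re
from collections import Counter

def freq_list(text):
    """Return (word, count) pairs, most frequent first -- a bare-bones
    version of what a Word Forms frequency list shows."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens).most_common()

print(freq_list("The web the monkey saw was the web"))
# → [('the', 3), ('web', 2), ('monkey', 1), ('saw', 1), ('was', 1)]
```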
Once you have saved your newly made corpus, you need to turn it into a text file by simply renaming the .crp file to a .txt file. It is a good idea to make a copy of the .crp file before renaming it so that you can still use the original file in TextSTAT.
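A slightly safer version of this copy-then-rename step is to copy the corpus straight to a .txt twin, leaving the original untouched. A minimal sketch (the filename is an example):

```python
import shutil
from pathlib import Path

def export_corpus(crp_path):
    """Copy a TextSTAT .crp corpus to a .txt file alongside it,
    leaving the original .crp in place for use in TextSTAT."""
    crp = Path(crp_path)
    txt = crp.with_suffix(".txt")
    shutil.copyfile(crp, txt)
    return txt

# Usage (assuming you saved your corpus as webmonkey.crp):
# export_corpus("webmonkey.crp")  # creates webmonkey.txt
```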
The start screen of AntConc looks like this:
Go to File/Open File and you will get a screen like this:
You can see the .txt file you renamed from the corpus you built in TextSTAT in the top left of the first column. You may notice a difference between the wordlist that AntConc builds and the wordlist TextSTAT builds:
This is because renaming the .crp file to a .txt file is a dirty workaround that does not strip the headers TextSTAT uses in its .crp file. The differences should be negligible for ESP purposes.
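If the header lines bother you, they can be filtered out before loading the file into AntConc. Note the pattern below is purely an assumption for illustration; open your renamed .txt file in a text editor, see what the TextSTAT header lines actually look like, and adjust the regex to match:

```python
import re

# ASSUMPTION: header lines look like a whole line wrapped in angle
# brackets. Inspect your own file and change this pattern to match
# what TextSTAT actually writes.
HEADER_RE = re.compile(r"^<.*>$")

def strip_headers(lines):
    """Drop lines that match the (assumed) header pattern."""
    return [ln for ln in lines if not HEADER_RE.match(ln.strip())]

sample = ["<file page1.html>", "Real sentence from the corpus."]
print(strip_headers(sample))
# → ['Real sentence from the corpus.']
```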
Once you have your .txt file in AntConc it is just a matter of playing around to see how you can use it for your purposes. Use the AntConc website as a jumping-off point to learn about the features. Stay tuned for a series of posts on how I use it for multimedia English classes.