Exposing and Quantifying Narrative and Thematic Structures in Well-formed and Ill-formed Text


Presentation Transcript


  1. Exposing and Quantifying Narrative and Thematic Structures in Well-formed and Ill-formed Text • Dr Dale Chant, Red Centre Software Pty Ltd • ASC Conference: Making Sense of New Research Technologies. Critical Reflections on Methodology and Technology: Gamification, Text Analysis, and Data Visualisation • Friday 6th and Saturday 7th September 2013, University of Winchester

  2. The Problem • Coding open-ended verbatims takes a long time • Inconsistent coding judgements can wreak havoc on small weekly samples • Some bodies of free text, such as Twitter feeds, are beyond human capacity to digest due to sheer volume • Machine coding by string matching assumes well-formed text; variant morphologies are difficult to accommodate

  3. Naïve Auto-Coding • Read all source words (or complete strings) into an array • Sort alphabetically • Assign codes from 1 to N, where N is the number of unique words (or unique strings) • Write the assigned codes in the original word (or string) order
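A minimal sketch of these four steps in Python; the function and variable names are illustrative, not the presenter's own implementation:

```python
import re

def naive_auto_code(text):
    """Naive auto-coding: one code per unique word, assigned alphabetically."""
    words = re.findall(r"[A-Za-z']+", text.lower())            # source words, in order
    unique_sorted = sorted(set(words))                          # unique words, alphabetical
    code_of = {w: i + 1 for i, w in enumerate(unique_sorted)}   # codes 1..N
    coded = [code_of[w] for w in words]                         # codes in original word order
    return coded, code_of

coded, frame = naive_auto_code("The dog chased the dogs")
print(coded)   # [4, 2, 1, 4, 3]
print(frame)   # the code frame: {'chased': 1, 'dog': 2, 'dogs': 3, 'the': 4}
```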

  4. Naïve Auto-Coding • Well-formed published text is code-complete • [Screenshot: the first line of Wuthering Heights, auto-coded] • The complete code frame has 9,201 items

  5. Netting to a Theme • With the code frame defined, themes can be netted from individual words, e.g. abandonment = abandon/abandoned/abandonment/reject/rejected/rejecting • Coded and decoded forms: Theme(1) = Text(3/5,6530/6532)
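As a plain-Python illustration (the Theme(1) = Text(3/5,6530/6532) expression above belongs to the presenter's own software; reading 3/5 and 6530/6532 as the code ranges 3 to 5 and 6530 to 6532 is an assumption), netting amounts to collapsing several word codes onto one theme code:

```python
# Net several word codes onto a single theme code.
# Assumes Text(3/5,6530/6532) denotes word codes 3-5 and 6530-6532.
ABANDONMENT_NET = set(range(3, 6)) | set(range(6530, 6533))

def net_theme(coded_words, net, theme_code=1):
    """Return the theme code wherever a coded word falls inside the net, else 0."""
    return [theme_code if code in net else 0 for code in coded_words]

# coded_words would come from the naive auto-coding step above
print(net_theme([2, 4, 6531, 9000], ABANDONMENT_NET))   # -> [0, 1, 1, 0]
```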

  6. Code Incomplete • Open-ended tracker Brand Awareness questions, time-dependent blog or social media exchanges • Can never be code-complete, because forthcoming data may throw up unanticipated variations, e.g. Dog/dogs/mongrel/mongrels/mutt/mutts/dingo/wolf/…

  7. Damerau-Levenshtein • One approach is Approximate String Matching • Match a source string to a target string by combinations of i) Insert ii) Delete iii) Replace iv) Transpose • The edit distance is the number of transforms needed to get from the source to the target
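A minimal sketch of the edit distance itself, using the common optimal-string-alignment form of Damerau-Levenshtein (each insert, delete, replace, or adjacent transpose costs 1); this is illustrative, not the implementation used in the presentation:

```python
def damerau_levenshtein(source, target):
    """Optimal-string-alignment distance: insert, delete, replace, transpose."""
    m, n = len(source), len(target)
    # d[i][j] = distance between source[:i] and target[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                 # delete everything
    for j in range(n + 1):
        d[0][j] = j                                 # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # delete
                          d[i][j - 1] + 1,          # insert
                          d[i - 1][j - 1] + cost)   # replace (or match)
            if (i > 1 and j > 1 and source[i - 1] == target[j - 2]
                    and source[i - 2] == target[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # transpose
    return d[m][n]

print(damerau_levenshtein("ox", "fox"))                     # 1
print(damerau_levenshtein("megalomania", "megalomaniacs"))  # 2
```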

  8. The Algorithm in Action • There is an interactive implementation of Damerau-Levenshtein at http://fuzzy-string.com/Compare/

  9. Scaling the Algorithm (1) • To be useful, the allowable distance for a positive match needs to scale against the length of the target strings • ‘ox’ to ‘fox’ has distance 1 (insert at head). This would be a false positive • ‘megalomania’ to ‘megalomaniacs’ has distance 2 (insert twice at tail). This is a good match

  10. Scaling the Algorithm (2) • Short strings need a distance of zero • Intermediate strings need 1 or 2 • Longer strings can bear 2 or 3 or more • The thresholds for short/intermediate/long and allowed distances for a positive match are here termed the fuzz parameters • Fuzz parameters are determined empirically, and will vary with the body of text being analysed.
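A sketch of how the fuzz parameters might be expressed in code, reusing the damerau_levenshtein function from the sketch above. The (length, distance) thresholds are illustrative, borrowed from the values the presentation later applies to the hashtags, and would be tuned empirically for each body of text:

```python
# Illustrative fuzz parameters: (maximum target length, allowed distance) pairs.
# Reuses damerau_levenshtein() from the sketch above.
FUZZ = [(4, 0), (9, 1), (float("inf"), 2)]

def allowed_distance(target, fuzz=FUZZ):
    """Maximum edit distance permitted for a positive match, scaled by target length."""
    for max_len, dist in fuzz:
        if len(target) <= max_len:
            return dist

def is_match(source, target, fuzz=FUZZ):
    return damerau_levenshtein(source, target) <= allowed_distance(target, fuzz)

print(is_match("ox", "fox"))                      # False: short targets allow distance 0
print(is_match("megalomania", "megalomaniacs"))   # True: long targets allow distance 2
```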

  11. What is Gained • The target string megalomaniac at an edit distance of 1 will match on: 12 * 26 in situ typos (negalomaniac) + 12 missing characters (megaomaniac) + 12 * 26 extraneous characters (megaloomaniac) + 11 transpositions (meglaomaniac) + 2 * 26 extra pre/post characters (mmegalomaniac) = 699 possible variations

  12. The Procedure • Code the source text, one code per unique word • Run a sorted frequency count to expose recurrent themes and concepts (see the sketch below) • Review actual instances of these words in situ to determine appropriate fuzz parameters and the thematic and conceptual contexts • Devise a compact target code frame which maps the theme and concept words of interest to synonym and variant lists • Process the source text against the targets to create a categorical variable which can be tabulated in the normal manner against any other variable
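For the Exposure step (the sorted frequency count in the second bullet), a minimal sketch reusing naive_auto_code from the earlier sketch; everything here is illustrative rather than the actual Red Centre workflow:

```python
from collections import Counter

def exposure(text):
    """Sorted frequency count over the coded words, most frequent first."""
    coded, code_of = naive_auto_code(text)                  # from the earlier sketch
    word_of = {code: word for word, code in code_of.items()}
    return [(word_of[code], n) for code, n in Counter(coded).most_common()]

print(exposure("love and hate, love and death"))
# [('love', 2), ('and', 2), ('hate', 1), ('death', 1)]
```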

  13. Exposure and Quantification: Romeo and Juliet • Since this text is bounded, code-complete and well-formed, the fuzz parameters can all be set to zero • The Exposure step reveals dominance for i) love and related themes, ii) misery and despair, iii) conflict and death

  14. Love dominates, then diminishes • "Romeo, Romeo, wherefore art thou Romeo?"

  15. To be replaced by misery and despair

  16. As Percents (share of scene)

  17. Interlaced with Conflict and Death • Near mathematical symmetry

  18. Wuthering Heights

  19. Ill-formed Text • Tweets on Australian Federal Politics • From 1 June 2013 to 31 July 2013 • Search term: #auspol OR #auspoll OR #ausvotes OR #ozcot • 927,190 cases • Averaging between 10,000 and 20,000 per day • Huge spike on 26 June

  20. http://votecompass.com/2013/07/25/are-you-among-australias-most-influential-political-tweeters-votecompass-maps-the-auspol-twittersphere/

  21. Tweet Frequencies

  22. Data Sources • Two commercial data source providers were used: Gnip and ScraperWiki • The Gnip data was collected in a single 28-hour run conducted on 15 Aug 2013 • ScraperWiki provides user-initiated searches over up to the prior seven days • Because ScraperWiki collects in near real time, accounts banned or suspended by 15 August, and hence absent from the Gnip data, remain present • The ScraperWiki data is used below only to demonstrate this point. http://gnip.com/ https://scraperwiki.com/

  23. Australian Federal Politics since 2007 • Leaders: Howard (Conservative), Rudd (Labor), Gillard (Labor), Abbott (Conservative) • Timeline ‘07 to ‘13: Rudd defeats Howard at the general election; Gillard challenges and defeats Rudd, calls an election, hung parliament; Rudd challenges and defeats Gillard, calls the election for 7 Sept

  24. The Grand Narrative (Warning: Aussie Vernacular Alert) • With the Pretender to the Throne (Gillard) summarily dispatched • The True and Rightful King (Rudd), triumphantly returned from (backbench) exile • Now faces the Great Adversary (Abbott) in a battle to the (political) death for control of the realm • [Chart annotation: Assange Senate Bid]

  25. Rudd’s Major Problem

  26. The Conservative’s Hammer

  27. Distorting the Message (1) Attack of the TweetBots

  28. @ALPDirt Posts Once a Minute

  29. Distorting the Message (2) The Scheduled Automatons

  30. Obsessive / Compulsives

  31. HashTags • Much more than just a metatag • They function as message tokens too: • Commentary on current affairs (#1000BoatDeaths, #20000JobCuts) • Calls to action (#2013electiondateplease, #AbolishParliament) • Political attack (#AbbotLies) • Take a position (#AgeOfEntitlement) • Make a joke or pun (#calmdownbirdie, #fraudband)

  32. 33,206 Unique HashTags over June/July

  33. Colour Masked to Highlight the Clumps

  34. Zoom-in on BattleRort

  35. Hashtag Spawn • Quantification should capture as many variants as possible.

  36. Sort on 19 July, Zoom • The PNG Solution is more punitive than anything the Conservatives have tried

  37. But many instances are missed • Three dominant tags are clear, but the variants will be lost under a search on just asylum OR asylumseeker/s • Ditto Refugee/Refugees, etc.

  38. Quantification • Smoothed percentage chart of all instances exposes the narratives, but to quantify them accurately, we cannot forego counting the variants. • To get a more precise read, we apply Damerau-Levenshtein. • Recalling the four transformation rules (insert, delete, replace, transpose), the following matches (among many others) will be made to the dominant forms at run time: • battelrort->battlerort (transpose once) • calmdownbirdie->calmdownbridie (transpose once) • asylumseeke ->asylumseekers (insert twice) • asylumseekeers->asylumseekers (delete once) • asylymseekers->asylumseekers (replace once)
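These distances can be checked directly with the damerau_levenshtein sketch from earlier; the pairs below are the ones listed on the slide:

```python
# Reuses damerau_levenshtein() from the earlier sketch.
pairs = [
    ("battelrort",     "battlerort"),       # transpose once  -> 1
    ("calmdownbirdie", "calmdownbridie"),    # transpose once  -> 1
    ("asylumseeke",    "asylumseekers"),     # insert twice    -> 2
    ("asylumseekeers", "asylumseekers"),     # delete once     -> 1
    ("asylymseekers",  "asylumseekers"),     # replace once    -> 1
]
for source, target in pairs:
    print(source, "->", target, "distance", damerau_levenshtein(source, target))
```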

  39. Prepare the synonym/variants lists for the dominant tags • The procedure is: • Code the hashtags, one code per unique tag • Generate a sorted frequency count table • Choose a cut-off point - I have used 30 • Review all items > 30, define and initialise a coded synonym/variants list with the dominant tags • Sort the table alphabetically by label • Review label blocks for any variants which are too coarse for Damerau-Levenshtein, and add to the relevant synonym/variants target list
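A minimal sketch of the counting and cut-off steps using Python's collections.Counter; the cut-off of 30 comes from the slide, while the function name and toy data are illustrative:

```python
from collections import Counter

def dominant_tags(hashtags, cutoff=30):
    """Sorted frequency count, keeping only tags seen more than cutoff times."""
    counts = Counter(tag.lower() for tag in hashtags)
    # Items above the cut-off, most frequent first, are candidates for the
    # coded synonym/variants lists.
    return [(tag, n) for tag, n in counts.most_common() if n > cutoff]

# In practice the input would be the 33,206 unique hashtags extracted from the
# tweets; a toy list stands in for them here.
print(dominant_tags(["#auspol", "#battlerort", "#auspol"], cutoff=1))   # [('#auspol', 2)]
```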

  40. Define Coded Categories against Targets

  41. Confirm it Works • Set fuzz parameters as distance=0 for strings 4 characters or less, distance=1 for 9 characters or less, and distance=2 for 10 characters or more • Run the source hashtags against these targets to create a new variable comprising eight categorical codes • To confirm, run a table of the eight coded categories against the original raw hashtag text The source strings battelrort and calmdownbirdie are both correctly captured and coded.
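A hedged sketch of this confirmation step, reusing damerau_levenshtein and allowed_distance from the earlier sketches; the target code frame below is illustrative (three of the eight categories), not the actual frame used:

```python
# Illustrative target code frame: category code -> synonym/variants list.
# Reuses damerau_levenshtein() and allowed_distance() from the earlier sketches,
# with the fuzz parameters from this slide (0 for <=4 chars, 1 for <=9, 2 for >=10).
TARGETS = {
    1: ["battlerort"],
    2: ["calmdownbridie"],
    3: ["asylum", "asylumseeker", "asylumseekers"],
}

def code_hashtag(tag, targets=TARGETS):
    """Return the first category whose variant list fuzzily matches the tag, else 0."""
    for code, variants in targets.items():
        for variant in variants:
            if damerau_levenshtein(tag, variant) <= allowed_distance(variant):
                return code
    return 0   # unmatched: falls outside the coded categories

print(code_hashtag("battelrort"))       # 1 (distance 1 to battlerort, 2 allowed)
print(code_hashtag("calmdownbirdie"))   # 2 (distance 1 to calmdownbridie, 2 allowed)
```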

  42. Doing Likewise for the Tweet Text • [Target definitions continue to code 46]

  43. Continued to Code 46

  44. Share of Voice • All synonym and variant matches for Rudd, Abbott, Gillard, as percentages of the sum of their total mentions per day

  45. Compare June to July

  46. Tweet Categories by Day

  47. Rudd vs Abbott – Image Attributes

  48. Corruption Share – June vs July

  49. Compared to Topsy Sentiment Score • http://www.couriermail.com.au/news/special-features/ruddeffect-on-the-wane-as-abbott-retains-the-people8217s-trust/story-fnho52jo-1226683181964 • Not much agreement here. Who is right?

  50. Performance • Machine: standard business Dell laptop, dual core, 4 GB RAM, nothing fancy, no acceleration • The bottleneck is the Damerau-Levenshtein step on the tweet text, which for the above 46 categories over 113 MB of plain text takes about 15 hours • Performance is linear in the number of individual target synonyms/variants • Damerau-Levenshtein on the hashtags, a much smaller set of targets, completes in about 20 minutes • The major time commitment from a human is in devising the target synonym and variants lists, here several hours • For more routine applications of the technique, such as open-ended brand lists, preparing the target lists is trivial
