June 20, 2008

If you think sentiment analysis technology can detect idiom, I have a bridge I’d like to sell you

Text mining tools are just WONDERFUL at detecting idiom, sarcasm, and figurative speech … Yeah, right. I asked Lexalytics CEO Jeff Catlin whether his tool could do that kind of thing, and he looked at me like I’d just grown a third ear.

Actually, he didn’t. But just like every other sentiment analysis vendor I encountered at the Text Analytics Summit or spoke to beforehand, he made it clear that his tool could only handle straightforward, literal expressions of opinion. Idiom, irony, sarcasm, metaphor, et al. are beyond the current reach of the technology.

Aren’t you just thrilled that I shared that earth-shattering news with you?

Comments

15 Responses to “If you think sentiment analysis technology can detect idiom, I have a bridge I’d like to sell you”

  1. H on June 20th, 2008 1:53 pm

    Ya right. This article is like a breath of fresh air. Thanks a lot for expanding my mind.

  2. Pete Mancini on June 20th, 2008 3:23 pm

    I agree that sentiment analysis is a hard problem. Even with semantic networks, the problem is that you need a lot of knowledge to detect sarcasm in the first place, which is why it tends not to work on the people it is most likely intended for. The problem is furthered by the fact that most of the vendors don’t even have a semantic model and thus are potentially confusing terms in their scoring. “The riot after the Celtics’ win on Tuesday was pretty ugly.” OK, so pretty (+1) and ugly (-1) doesn’t really help in the scoring of the riot.
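    A minimal sketch of the word-by-word scoring Pete describes (the lexicon and function are invented for illustration, not any vendor’s actual code):

```python
# Naive bag-of-words sentiment scoring: look each word up in a polarity
# lexicon and sum the scores. Modifier-head structure is lost entirely,
# so "pretty" (+1) and "ugly" (-1) simply cancel each other out.

# Toy lexicon, for illustration only.
LEXICON = {"pretty": 1, "ugly": -1, "win": 1, "riot": -1}

def naive_score(sentence: str) -> int:
    words = sentence.lower().replace(".", "").replace("’", "").split()
    return sum(LEXICON.get(w, 0) for w in words)

score = naive_score("The riot after the Celtics’ win on Tuesday was pretty ugly.")
# riot (-1) cancels win (+1), and pretty (+1) cancels ugly (-1): score is 0.
```

    The zero total is exactly the point: without syntax, the scorer cannot tell that “pretty” intensifies “ugly” rather than praising the riot.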

    The Text Analytics Summit was fun, but a lot of attendees probably learned that the market is full of technology that is sometimes hard to grasp. There were also companies pushing stuff that was obviously bad. I won’t name names, but one guy got stomped on by three questions before they cut off all Q&A and moved on to the next speaker!

  3. Alan on June 20th, 2008 3:58 pm

    re: “The problem is furthered by the fact that most of the vendors don’t even have a semantic model and thus are potentially confusing terms in their scoring”

    Yep, so maybe what we need to decide is what market factors are preventing the economical use of a good underlying model (I know it’s not a matter of the models existing — I first demonstrated one in 1985 or 1986.)

    Then, we need to decide whether the market will reassume the lemming-like form it assumed in the mid- to late 1980s. If it doesn’t, we can fix this problem; if it does, we need to find a new livelihood!

  4. Curt Monash on June 20th, 2008 8:05 pm

    “Pretty” as a modifier of “ugly” rather than “riot” is something I’d guess is already handled today, at least in some products. Attensity, for example, has long had full sentence parsing.

  5. Tim Estes on June 21st, 2008 3:27 pm

    Yes – but “pretty” being a modifier doesn’t help you here. The key is the scoring of the context as a whole. It’s a ranking issue to know whether the above sentence is actually giving you a positive or negative sentiment. Pete’s point is really that riot = 0 because of the odd sense of “pretty” in this case.

    I’d doubt that Attensity would do much here (no offense to their offering). Maybe a more subtle algorithm would, but it’s likely the only way to handle it would be to recognize the sense of “pretty” here as an augmentation of the negative influence of “ugly.” Doable only if you could recognize irony – i.e., WordNet’s 2nd sense of “pretty.”

    If David reads this, maybe he could give it a go and we could all see. 🙂

  6. Curt Monash on June 21st, 2008 4:14 pm

    Tim,

    OK, I see what you’re getting at. If “pretty ugly” can’t be tagged — based on syntax and semantics — as meaning something quite different from “beautiful, ugly”, all bets are off.

    And the syntactic clues are indeed — well, they’re PRETTY slim.

    CAM

  7. Tim Estes on June 24th, 2008 12:35 am

    Touché. Sometimes puns really do land.

  8. David Bean on June 24th, 2008 1:55 pm

    Hi guys,

    Yah, I’ve got a couple of responses for you ;-). First, I completely agree with you on detecting idiomatic usage… that’s beyond the state of the art, at least as I understand it. In fact, at last year’s TA Summit, I spoke directly to this issue. I had a slide in my preso about sentiment analysis titled “Reality Check” with examples of content that can’t be handled:

    Sarcasm – “You really know how to make a customer feel appreciated, don’t you?”

    Sarcasm with Tone of Voice – “Oh….that makes me sooooo happy.”

    Metaphor – “I’m as happy as a turkey on November 24th”

    Idioms – “I’m just like a bug in a rug.” “Happy as a clam”

    Of those, I actually think idiomatic usage may be the more addressable since you could enumerate a slew of idioms and a simple lexical match would suffice for many of them.
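    The enumeration approach David mentions could look something like this (a sketch; the idiom list and polarities are invented for illustration):

```python
# Idiom detection by simple lexical match: scan the text for any known
# idiom and, if one fires, use its listed polarity instead of trying
# to score the phrase word by word.

IDIOM_POLARITY = {
    "snug as a bug in a rug": 1,
    "happy as a clam": 1,
    "the pot calling the kettle black": -1,
}

def match_idiom(text: str):
    t = text.lower()
    for idiom, polarity in IDIOM_POLARITY.items():
        if idiom in t:
            return idiom, polarity
    return None  # no known idiom; fall back to ordinary scoring

match_idiom("After the upgrade I was happy as a clam.")
```

    Exact matching only suffices “for many of them,” as David says: idioms come in variant forms (“bug in a rug” vs. “snug as a bug in a rug”), so a real list would need some fuzziness.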

    But in general, this sort of thing is just beyond our reach, and by “our” I mean the field.

    More in a bit,

    – David

  9. David Bean on June 24th, 2008 6:31 pm

    On the second issue – semantic understanding and the sample sentence…

    “The riot after the Celtics’ win on Tuesday was pretty ugly.”

    Since we’d parse that, we’d get something like this bracketed form to work with:

    [[The riot]np/subj [after [the Celtics’ win]np]pp [on Tuesday]advp [was]vp [pretty ugly]adjp ]clause

    We’d also recognize that pretty is a modifier of ugly. (btw, I’ve left out things like POS tags, entity identification, semantic class tags, etc.)

    From this kind of syntactic analysis, we could perform a number of extraction processes to turn the parse into something more abstract and recognizable to an analyst. In most cases, we’d map the issue of interest to something that looks like this:

    win : ugly [more]

    The term before the colon is a thing, the term after the colon represents an action performed on that thing or a characteristic or quality of that thing. In this case, the win was ugly. In addition, because we’ve mapped “pretty” into a collection of terms that augment head adjectives, we’d represent this as [more] to distinguish it from a simple case of “the win was ugly.” We call this nuance in expression “voicing” and we use it to pick up on augmentation/diminishment of adjectives, plus negation, recurrence, conditionality, and a bunch of other stuff on verb phrases.
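    As a sketch, the mapping from a parsed modifier/adjective pair into that voiced form might look like this (the function and word lists are hypothetical illustrations, not the product’s actual code; the subject is taken to be the clause’s subject NP):

```python
from typing import Optional

# "Voicing" sketch: given (subject, optional modifier, head adjective)
# from an upstream parse, emit the abstract "thing : quality [more/less]"
# form, marking augmentation or diminishment of the adjective.

INTENSIFIERS = {"pretty", "very", "really"}      # augment the head adjective
DIMINISHERS = {"slightly", "somewhat", "a bit"}  # diminish it

def voice(subject: str, modifier: Optional[str], adjective: str) -> str:
    if modifier in INTENSIFIERS:
        return f"{subject} : {adjective} [more]"
    if modifier in DIMINISHERS:
        return f"{subject} : {adjective} [less]"
    return f"{subject} : {adjective}"

voice("riot", "pretty", "ugly")  # "riot : ugly [more]"
```

    Negation, recurrence, and conditionality on verb phrases would be further markers of the same kind, attached in the same slot.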

    Now, to the semantic model question. We try to do a fair amount of semantic disambiguation by virtue of parsing into thematic roles, i.e. getting at actors, actions, recipients, instruments, etc., at a level above the syntax. That’s what’s letting us understand that “pretty” augments “ugly” in the example above. At a whole ‘nother level, what exactly an “ugly win” means in a larger context — is that a sports-related victory that involved a lot of poor play or is that a political race victory that included lots of negative advertising — requires a ton of real world knowledge, and that’s always been a bug-a-boo in the AI world. I’ve seen some taxonomy-based search engines that could distinguish word senses like that, but tying the correct notion of an “ugly win” to a larger understanding of the world gets, yep, ugly, really fast.

    I’ve seen a number of government uses where extracted data is used to trigger ontologies and reasoning over predefined concepts and relationships, and that’s probably approaching the level of semantic modeling you’d need to detect sarcasm, but it would still be difficult to distinguish why some piece of data didn’t fit the model — was it due to sarcasm, or was it incorrectly extracted data, or was it new knowledge that the ontology is missing? I know enough about ontologies and automated reasoning to be just a little bit more than dangerous, so there may be better answers out there.

    – David

  10. Chris Riopel on July 1st, 2008 8:07 pm

    David,

    I think you meant to say that the parse result was

    riot : ugly

    Unless you were being brutally honest (above and beyond, really) about how your software might have incorrectly parsed it…

    Chris

  11. Tim Estes on July 6th, 2008 3:12 pm

    David,

    Thanks for that explication. It was quite informative and honest in stating clearly the strengths and limitations of what can be done right now. It’s quite an impressive bit of engineering to handle subtleties such as that on the syntactic/role level with that kind of potential ambiguity. The “voicing” piece is particularly cool.

    As for the comments on the semantic model problem… that is one way to handle it – at least with a schematic bias. Of course, there is another approach that might look at it more as a rich model of features with particular expectations such that a use in the way described is novel and suggests that the representation used (i.e. the word) is actually not representative of the expected underlying idea.

    So Chris… how would Inxight/BO/SAP handle that? You know you asked for that question. 😉

    -Tim

  12. Kirk Daly on July 10th, 2008 8:37 am

    Hi all,

    The comment I would make is that yes, while irony, sarcasm, etc. are beyond most technologies today (primarily, in my view, because they require a degree of real-world knowledge that pushes processing times beyond what is acceptable), that doesn’t undermine the massive contribution sentiment analysis software can make to certain business applications.

    A good example of this would be the deployment of Infonic’s Sentiment Analysis technology within Thomson Reuters’ NewsScope product. Trading desks receive a large volume of market news in a very standard format, typically delivered from the journalist without irony or other such “difficult” language.

    While in individual instances Sentiment’s linguistic algorithms may well be defeated by the language used, on average the technology does a proven job of extracting the sentiment of the coverage as it relates to the entities mentioned. This enables accurate real time tracking of the average sentiment of the news flow pertaining to specific stocks – with the obvious application to automated trading.

    It is not necessary to take our word on this. Sceptics need only look to the large trading banks that are purchasing the software in increasing numbers for use within their algorithmic trading systems. They are doing this having rigorously tested the software and proven to their own satisfaction the correlation of movements in our average sentiment score with movements in the stock price.

    To take on the specific “… riot after…” example discussed, Sentiment would treat “pretty” as an intensifier of “ugly (riot)” but handle “beautiful, ugly” quite differently. From what David has said above I think both Infonic and Attensity would think that all bets are very much ON…

    – Kirk

  13. jane on August 11th, 2008 10:54 am

    Two different aspects of idiom sentiment analysis (or of any text analysis) should be recognized: real-world usage, and what labs unrealistically create. The vast majority of the population uses fairly common idioms and phrases, not very complex and not very tricky ones. The problem is that once the software gets tested by “so-called experts,” it gets tested on extra-unrealistic phrases that are barely used in the real world.
    So idiom detection should not be hard if it tried to mimic reality.

  14. Curt Monash on August 12th, 2008 4:30 pm

    Jane,

    I think it’s harder than you’re suggesting. For one thing, idioms change with fashion. For another, idioms don’t all have fixed forms. E.g., I’ve seen “Pot: Meet kettle” and “pot-kettle-black” both used fairly often to refer to “the pot is calling the kettle black”. And I’ve been known to post about “the relative colors of cookware”.

    CAM

  15. The rise of machine-written journalism | Fullrunner on January 3rd, 2011 9:36 am

    […] the way, of course, intelligent systems will need to start coping with the complexities of human language that have so far confounded them, including idiom, metaphor and […]
