Multi-Document Summarization of Evaluative Text


GENERAL

ABSTRACT & CONCLUSION

  • compares two approaches to summarization
    • sentence extraction based approach MEAD (an open source package)
    • language generation based approach SEA
  • conclusion
    • both perform equally well quantitatively
      • MEAD: varied language and detail, but lacks accuracy and fails to give an overview
      • SEA: provides a general overview, but sounds 'robotic', repetitive and incoherent (disjointed, not cohesive)
    • the two behave differently, with complementary strengths and weaknesses
    • should synthesize the two approaches

INTRODUCTION

  • INDUSTRIAL NEEDS

    • summaries of the large body of online customer reviews could be of great strategic value to product designers, planners and manufacturers
    • other important commercial applications, such as summarization of travel logs
    • non-commercial applications, such as the summarization of candidate reviews
  • PROBLEM

    • how to effectively summarize a large corpus of evaluative text about a single entity, e.g. a product
    • for factual documents, the goal is to extract important facts and present them in a sensible order while avoiding repetition
    • when documents contain inconsistent info, e.g. conflicting reports, the goal is to identify overlaps and inconsistencies and produce a summary that points out and explains those inconsistencies
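The extraction goal above (pick important sentences, avoid repetition) can be sketched in a few lines. This is only an illustrative frequency-based scorer with an MMR-style redundancy penalty, not MEAD's actual centroid algorithm; all function names and the penalty weight are assumptions for the sketch.

```python
from collections import Counter

def tokenize(sentence):
    # Toy tokenizer: lowercase words, punctuation stripped.
    return [w.strip(".,!?").lower() for w in sentence.split() if w.strip(".,!?")]

def summarize(sentences, k=2, redundancy_penalty=0.5):
    """Select k sentences: score each by average corpus-wide word
    frequency, then penalize overlap with already-selected sentences."""
    tokens = [tokenize(s) for s in sentences]
    freq = Counter(w for toks in tokens for w in toks)
    selected, chosen_words = [], set()
    candidates = list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            toks = tokens[i]
            if not toks:
                return 0.0
            base = sum(freq[w] for w in toks) / len(toks)
            overlap = len(set(toks) & chosen_words) / len(set(toks))
            return base - redundancy_penalty * base * overlap
        best = max(candidates, key=score)
        selected.append(best)
        chosen_words |= set(tokens[best])
        candidates.remove(best)
    # Present extracted sentences in their original (sensible) order.
    return [sentences[i] for i in sorted(selected)]
```

The redundancy term is what distinguishes this from naive top-k scoring: a second sentence repeating the same high-frequency words is demoted, which addresses the "avoiding repetition" goal stated above.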