Validating Estimates of Latent Traits from Textual Data Using Human Judgment as a Benchmark. (2013)

by Will Lowe, Kenneth Benoit
Venue: Political Analysis

Results 1 - 10 of 10

Life of Brian Revisited: Assessing Informational and Non-Informational Leadership Tools

by Alexander Baturo, Slava Mikhaylov
"... Recent literature models leadership as the process of communication where the rhetorical signals of leaders facilitate followers ’ coordination. While studies exist that explored the effects of leadership in experimental settings, there remains a lack of empirical research on the effectiveness of in ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
Recent literature models leadership as a process of communication in which the rhetorical signals of leaders facilitate followers' coordination. While studies exist that have explored the effects of leadership in experimental settings, there remains a lack of empirical research on the effectiveness of informational tools in real political environments. Using quantitative text analysis of federal and subnational legislative addresses in Russia, this paper empirically demonstrates that followers react to informational signals from leaders. We further theorize that leaders employ a combination of informational and non-informational tools in order to solve the coordination problem. The findings show that the strategic calculus of followers is determined by a mixture of informational and non-informational tools. Ignoring the non-informational tools, and particularly the interrelationship between the latter and the informational tools, can threaten the internal validity of causal inference in the analysis of leadership effects on coordination.

Computer-Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses

by unknown authors
"... Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to ..."
Abstract - Add to MetaCart
Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to the learning analytics community the Structural Topic Model, an approach to language processing that can (1) find syntactic patterns with semantic meaning in unstructured text, (2) identify variation in those patterns across covariates, and (3) uncover archetypal texts that exemplify the documents within a topical pattern. We show examples of computationally-aided discovery and reading in three MOOC settings: mapping students' self-reported motivations, identifying themes in discussion forums, and uncovering patterns of feedback in course evaluations.
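The Structural Topic Model itself is distributed as the R package stm; as a purely illustrative stand-in in Python, the sketch below fits a plain LDA topic model with scikit-learn, which recovers topical patterns (point 1 above) but not STM's covariate effects or archetypal-text retrieval. The forum posts are hypothetical.

    # Stand-in sketch: STM ships as the R package "stm"; scikit-learn's LDA
    # finds topics but not STM's covariate effects. Posts are hypothetical.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    posts = [
        "I enrolled to improve my programming skills for my job",
        "The peer feedback on the second assignment was very helpful",
        "Lecture videos run too long and the quizzes feel rushed",
    ]

    # Bag-of-words document-term matrix.
    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(posts)

    # Fit a two-topic model and print the top words per topic.
    lda = LatentDirichletAllocation(n_components=2, random_state=1).fit(dtm)
    vocab = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [vocab[i] for i in weights.argsort()[::-1][:5]]
        print(f"topic {k}: {', '.join(top)}")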

Citation Context

...the natural improvements in speed, the ability to process high volumes of text, and the consistency of treatment of all parts of the corpus (Grimmer & King, 2011; Hillard, Purpura, & Wilkerson, 2008; Lowe & Benoit, 2013). Humans often struggle with the development of complicated coding schemes (Quinn, Monroe, Colaresi, Crespin, & Radev, 2010), and there is some experimental evidence to suggest that humans judge clus...

unknown title

by unknown authors
"... Abstract Comparative politics scholars are well poised to take advantage of recent advances in research designs and research tools for the systematic analysis of textual data. This paper provides the first focused discussion of these advances for scholars of comparative politics, though many argume ..."
Abstract - Add to MetaCart
Comparative politics scholars are well poised to take advantage of recent advances in research designs and research tools for the systematic analysis of textual data. This paper provides the first focused discussion of these advances for scholars of comparative politics, though many arguments are applicable across political science sub-fields. With the explosion of textual data in countries around the world, it is important for comparativists to stay at the cutting edge. We situate recent and existing tools within a broader framework of methods to process, manage, and analyze textual data. While we review a variety of analysis tools of interest, we particularly focus on methods that take into account information about when, and by whom, a particular piece of text was generated. We also engage with more pragmatic considerations about the ability to process large volumes of text that come in multiple languages. All of our discussions are illustrated with existing, and several new, software implementations.

unknown title

by unknown authors
"... ..."
Abstract - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...ers: http://www.kokusyo.jp/wp-content/uploads/2015/10/MDK151006b.pdf (access date: 2015/11/30) 6 http://www.world-nuclear.org/info/Facts-and-Figures/Nuclear-generation-by-country/ (access date: 2015/09/17) evidence of international pressure (gaiatsu), which we later call the “reverse Fukushima Effect” channeled through mass media. Given the critically assessed subjectivity of this method and the problems of generating assumptions from text data in order to identify latent traits and evaluate their “usefulness” in measuring “real quantities”, our method is validated through the findings of Lowe and Benoit (2013), who validated human judgment as a benchmark for qualitative content analysis of political text data in terms of “semantic validity”, i.e. that the quantity being scaled from qualitative and sentiment text analyses reflects the quantity that was intended to be measured. While using tools within the analytical program NVivo 10, designed for qualitative research, we performed a sentiment analysis through an attribute value matrix query based on our coded content. For this, it was necessary to define attribute values for the data. These attribute values basically consist of elements of a coding sh...

2F Quantitative Text Analysis

by Kenneth Benoit, 2013
"... The course surveys methods for systematically extracting quantitative information from text for social scientific purposes, starting with classical content analysis and dictionary-based methods, to classification methods, and state-of-the-art scaling methods and topic models for estimating quantitie ..."
Abstract - Add to MetaCart
The course surveys methods for systematically extracting quantitative information from text for social scientific purposes, moving from classical content analysis and dictionary-based methods, through classification methods, to state-of-the-art scaling methods and topic models for estimating quantities from text using statistical techniques. The course lays a theoretical foundation for text analysis but mainly takes a very practical and applied approach, so that students learn how to apply these methods in actual research. The common focus across all methods is that they can be reduced to a three-step process: first, identifying texts and units of texts for analysis; second, extracting from the texts quantitatively measured features, such as coded content categories, word counts, word types, dictionary counts, or parts of speech, and converting these into a quantitative matrix; and third, using quantitative or statistical methods to analyse this matrix in order to generate inferences about the texts or their authors. The course systematically surveys these methods in a logical progression, with a very practical hands-on approach where each technique will be applied in lab sessions, using appropriate software, on real texts.

Objectives
This course aims to provide a practical foundation for, and a working knowledge of, the main applied techniques of quantitative text analysis for social science research. The course covers many fundamental issues in quantitative text analysis such as inter-coder agreement, reliability, validation, accuracy, and precision. It also surveys the main techniques such as human coding (classical content analysis), dictionary approaches, classification methods, and scaling models. It also includes systematic consideration of published applications and examples of these methods, from a variety of disciplinary and applied fields, including political science, economics, sociology, media and communications, marketing, finance, social policy, and health policy. Lessons will consist of a mixture of theoretical grounding in content analysis approaches and techniques, with hands-on analysis of real texts using content analytic and statistical software.
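A minimal sketch of that three-step process, assuming a plain bag-of-words feature set; the two speeches are hypothetical stand-ins for real course texts:

    from sklearn.feature_extraction.text import CountVectorizer

    # Step 1: identify texts and units of analysis (here, one unit per speech).
    speeches = [
        "We must cut spending and balance the budget",
        "Investment in public services must be protected",
    ]

    # Step 2: extract counted features into a document-feature matrix.
    vectorizer = CountVectorizer(lowercase=True, stop_words="english")
    dfm = vectorizer.fit_transform(speeches)

    # Step 3: analyse the matrix, e.g. relative term frequencies per document.
    counts = dfm.toarray()
    freq = counts / counts.sum(axis=1, keepdims=True)
    print(vectorizer.get_feature_names_out())
    print(freq)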

Text Analysis: Estimating Policy Preferences From Written and Spoken Words

by Kenneth Benoit, Alexander Herzog, 2015
"... This chapter provides an introduction into the emerging field of quantitative text analysis. Almost every aspect of the policy-making process involves some form of ver-bal or written communication. This communication is increasingly made available in electronic format, which requires new tools and m ..."
Abstract - Add to MetaCart
This chapter provides an introduction to the emerging field of quantitative text analysis. Almost every aspect of the policy-making process involves some form of verbal or written communication. This communication is increasingly made available in electronic format, which requires new tools and methods to analyze large amounts of textual data. We begin with a general discussion of the method and its place in public policy analysis, including a brief review of existing applications in political science. We then discuss typical challenges that readers encounter when working with political texts. This includes differences in file formats, the definition of “documents” for analytical purposes, word and feature selection, and the transformation of unstructured data into a document-feature matrix. We will also discuss typical pre-processing steps that are made when working with text. Finally, in the third section of the chapter, we demonstrate the application of text analysis to measure individual legislators' policy preferences from annual budget debates in Ireland.
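A minimal sketch of the kind of pre-processing the chapter refers to, assuming plain-text input; the sample sentence and the tiny stopword list are illustrative only:

    import re

    text = "The Minister for Finance announced further cuts in current spending."

    # Tokenise: lowercase and keep alphabetic tokens only.
    tokens = re.findall(r"[a-z]+", text.lower())

    # Feature selection: drop function words via a (toy) stopword list.
    stopwords = {"the", "for", "in"}
    features = [t for t in tokens if t not in stopwords]
    print(features)
    # ['minister', 'finance', 'announced', 'further', 'cuts', 'current', 'spending']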

Citation Context

...re party preferences in German elections (Slapin and Proksch, 2008), European interest group statements (Klüver, 2009), the European Parliament (Proksch and Slapin, 2010), and Irish budget speeches (Lowe and Benoit, 2013). Classification approaches use mainly unsupervised methods adapted from computer science for topic discovery and for estimating the content and fluctuations in the discussions over policy. Since t...

The Quantitative Analysis of Textual Data Autumn 2014

by Kenneth Benoit, 2014
"... The course surveys methods for systematically extracting quantitative information from political text for social scientific purposes, starting with classical content analysis and dictionary-based meth-ods, to classification methods, and state-of-the-art scaling methods and topic models for estimatin ..."
Abstract - Add to MetaCart
The course surveys methods for systematically extracting quantitative information from political text for social scientific purposes, moving from classical content analysis and dictionary-based methods, through classification methods, to state-of-the-art scaling methods and topic models for estimating quantities from text using statistical techniques. The course lays a theoretical foundation for text analysis but mainly takes a very practical and applied approach, so that students learn how to apply these methods in actual research. The common focus across all methods is that they can be reduced to a three-step process: first, identifying texts and units of texts for analysis; second, extracting from the texts quantitatively measured features, such as coded content categories, word counts, word types, dictionary counts, or parts of speech, and converting these into a quantitative matrix; and third, using quantitative or statistical methods to analyse this matrix in order to generate inferences about the texts or their authors. The course systematically surveys these methods in a logical progression, with a practical, hands-on approach where each technique will be applied using appropriate software to real texts.

Objectives
The course is also designed to cover many fundamental issues in quantitative text analysis such ...

Votes

by Daniel Schwarz, Denise Traber, Kenneth Benoit, 2014
"... School. Copyright © and Moral Rights for the papers on this site are retained by the individual authors and/or other copyright owners. Users may download and/or print one copy of any article(s) in LSE Research Online to facilitate their private study or for non-commercial research. You may not engag ..."
Abstract - Add to MetaCart
School. Copyright © and Moral Rights for the papers on this site are retained by the individual authors and/or other copyright owners. Users may download and/or print one copy of any article(s) in LSE Research Online to facilitate their private study or for non-commercial research. You may not engage in further distribution of the material or use it for any profit-making activities or any commercial gain. You may freely distribute the
(Show Context)

Citation Context

...es to estimate MP positions in other parliaments, such as pro- and anti-EU positioning in the European Parliament (Proksch and Slapin, 2010) and to preferences for austerity in Irish budget speeches (Benoit and Lowe, 2013). The advantage of the Poisson scaling method is that as an unsupervised method, it requires no “training” step or identification of known positions. Furthermore, its method closely matches that of t...
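For reference, the Poisson scaling (“Wordfish”) model the excerpt refers to is conventionally written as follows; the notation below is the standard presentation from Slapin and Proksch (2008), not recovered from this excerpt:

    % w_{ij} is the count of word j in document i
    w_{ij} \sim \mathrm{Poisson}(\lambda_{ij}), \qquad
    \log \lambda_{ij} = \alpha_i + \psi_j + \beta_j \theta_i

Here \alpha_i is a document effect, \psi_j a word effect, \beta_j the word's discrimination, and \theta_i the latent position. All parameters are estimated from the word counts alone, which is why no training texts or known reference positions are needed.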

Government in Crisis: Opening the “Black Box” of Intra-Cabinet Competition Over Budgetary Allocation

by Alexander Herzog, Slava Mikhaylov, 2014
"... With the onset of the current economic and financial crisis in Europe, questions about the power of core executives to control fiscal outcomes are more important than ever. Why are some governments more effective in controlling spending while others fall prey to exces-sive overspending by individual ..."
Abstract - Add to MetaCart
With the onset of the current economic and financial crisis in Europe, questions about the power of core executives to control fiscal outcomes are more important than ever. Why are some governments more effective in controlling spending while others fall prey to excessive overspending by individual cabinet ministers? We approach this question by opening the “black box” of intra-cabinet decision-making. Using individual cabinet members' contributions to budget debates in Ireland, we estimate their positions on a latent dimension that represents their relative levels of support for or opposition to the cabinet leadership. We find that ministers who are close to the finance minister receive a larger budget share, but under worsening macro-economic conditions closeness to the prime minister is a better predictor of budget allocations. Our results, therefore, show that the effectiveness of delegating fiscal authority crucially depends on the economic environment.
Key Words: Intra-cabinet bargaining, budgetary politics, fiscal governance, text analysis
∗Author names are listed in alphabetical order. Authors have contributed equally to all work.

Citation Context

...rnment rather than those of specific spending departments (Von Hagen & Harden, 1995, 774). This assumption has been made in a recent applied analysis of the politics of budgetary redistribution (e.g. Lowe & Benoit, 2013). We argue that this assumption severely misrepresents intra-cabinet politics and conflict over the redistribution of financial resources. Wildavsky & Caiden (2004) describe budgets as struggles for ...

Putting Text in Context: How to Estimate Better Left-Right Positions by Scaling Party Manifesto Data using Item Response Theory

by Kenneth Benoit, 2014
"... For over three decades, party manifestos have formed the largest source of textual data for estimating party policy positions and emphases, resting on the pillars of two key assump-tions: that party policy positions can be measured on known dimensions by counting text units in predefined categories, ..."
Abstract - Add to MetaCart
For over three decades, party manifestos have formed the largest source of textual data for estimating party policy positions and emphases, resting on the pillars of two key assumptions: that party policy positions can be measured on known dimensions by counting text units in predefined categories, and that more text in a given category indicates stronger emphasis. Here we revisit the inductive approach to estimating policy positions from party manifesto data, demonstrating that there is no single definition of left-right policy that fits well in all contexts, even though meaningful comparisons can be made by locating parties on a single dimension in each context. To estimate party positions, we apply a Bayesian, multi-level, Poisson-IRT measurement model to category counts from coded party manifestos. By treating the categories as “items” and policy positions as a latent variable, we are able to recover not only left-right estimates but also direct estimates of how each policy category relates to this dimension, without having to decide these relationships in advance based on political theory, exploratory analysis, or guesswork. Finally, the flexibility of our framework permits numerous extensions, designed to incorporate models of manifesto authorship, coding effects, and additional explanatory variables (including time and country effects) to improve estimates.
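As a rough sketch, a Poisson-IRT measurement model of this kind can be written as follows; the notation is illustrative, and the paper's exact multilevel specification and priors may differ:

    % c_{ij} is the count of manifesto i's text units coded into category j
    c_{ij} \sim \mathrm{Poisson}(\mu_{ij}), \qquad
    \log \mu_{ij} = \log T_i + \alpha_j + \beta_j \theta_i

Here T_i is the total number of coded units in manifesto i (an exposure term), \alpha_j the baseline rate of category j as an “item”, \beta_j its loading on the left-right dimension, and \theta_i the manifesto's latent position, so the category-dimension relationships are estimated rather than fixed in advance.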

Citation Context

...trates one of the significant consequences of using the restrictive and unrealistic variance assumption of the Poisson model, which leads to significantly underestimated parameter uncertainty in θ̂i (Lowe and Benoit, 2013).

4 Left-right as a super-issue in different contexts
4.1 The policy components of the left-right dimension
For proponents of the “deductive” approach to measuring political spaces, the authoritative...
