Results 1 - 10 of 10
Life of Brian Revisited: Assessing Informational and Non-Informational Leadership Tools
Abstract (Cited by 1, 0 self)
Recent literature models leadership as a process of communication in which the rhetorical signals of leaders facilitate followers' coordination. While studies have explored the effects of leadership in experimental settings, there remains a lack of empirical research on the effectiveness of informational tools in real political environments. Using quantitative text analysis of federal and subnational legislative addresses in Russia, this paper empirically demonstrates that followers react to informational signals from leaders. We further theorize that leaders employ a combination of informational and non-informational tools in order to solve the coordination problem. The findings show that the strategic calculus of followers is determined by a mixture of informational and non-informational tools. Ignoring the non-informational tools, and particularly their interrelationship with the informational tools, can threaten the internal validity of causal inference in the analysis of leadership effects on coordination.
Computer-Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses
Abstract
Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to the learning analytics community the Structural Topic Model, an approach to language processing that can (1) find syntactic patterns with semantic meaning in unstructured text, (2) identify variation in those patterns across covariates, and (3) uncover archetypal texts that exemplify the documents within a topical pattern. We show examples of computationally-aided discovery and reading in three MOOC settings: mapping students' self-reported motivations, identifying themes in discussion forums, and uncovering patterns of feedback in course evaluations.
unknown title
Abstract
Comparative politics scholars are well poised to take advantage of recent advances in research designs and research tools for the systematic analysis of textual data. This paper provides the first focused discussion of these advances for scholars of comparative politics, though many arguments are applicable across political science sub-fields. With the explosion of textual data in countries around the world, it is important for comparativists to stay at the cutting edge. We situate recent and existing tools within a broader framework of methods to process, manage, and analyze textual data. While we review a variety of analysis tools of interest, we particularly focus on methods that take into account information about when and who generated a particular piece of text. We also engage with more pragmatic considerations about the ability to process large volumes of text that come in multiple languages. All of our discussions are illustrated with existing, and several new, software implementations.
Quantitative Text Analysis
, 2013
Abstract
The course surveys methods for systematically extracting quantitative information from text for social scientific purposes, starting with classical content analysis and dictionary-based methods, to classification methods, and state-of-the-art scaling methods and topic models for estimating quantities from text using statistical techniques. The course lays a theoretical foundation for text analysis but mainly takes a very practical and applied approach, so that students learn how to apply these methods in actual research. The common focus across all methods is that they can be reduced to a three-step process: first, identifying texts and units of texts for analysis; second, extracting from the texts quantitatively measured features—such as coded content categories, word counts, word types, dictionary counts, or parts of speech—and converting these into a quantitative matrix; and third, using quantitative or statistical methods to analyse this matrix in order to generate inferences about the texts or their authors. The course systematically surveys these methods in a logical progression, with a very practical hands-on approach where each technique will be applied in lab sessions using appropriate software, on real texts. Objectives: This course aims to provide a practical foundation for, and a working knowledge of, the main applied techniques of quantitative text analysis for social science research. The course covers many fundamental issues in quantitative text analysis such as inter-coder agreement, reliability, validation, accuracy, and precision. It also surveys the main techniques such as human coding (classical content analysis), dictionary approaches, classification methods, and scaling models.
It also includes systematic consideration of published applications and examples of these methods, from a variety of disciplinary and applied fields, including political science, economics, sociology, media and communications, marketing, finance, social policy, and health policy. Lessons will consist of a mixture of theoretical grounding in content analysis approaches and techniques, with hands-on analysis of real texts using content analytic and statistical software.
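The three-step process described in the syllabus can be sketched end to end in plain Python. The two "authors" and their sentences below are toy examples, not real texts from the course.

```python
# Sketch of the syllabus's three-step process:
# (1) identify texts, (2) extract features into a quantitative matrix,
# (3) run a simple statistical comparison on that matrix.
from collections import Counter

# Step 1: the texts, one per (hypothetical) author.
texts = {
    "author_a": "tax cuts and lower tax rates will grow the economy",
    "author_b": "public spending on health and education will grow the economy",
}

# Step 2: tokenize and build a document-feature matrix of word counts.
counts = {name: Counter(text.split()) for name, text in texts.items()}
vocab = sorted(set().union(*counts.values()))
dfm = [[counts[name][w] for w in vocab] for name in texts]

# Step 3: a crude inference -- which word most distinguishes the two authors?
diffs = {w: counts["author_a"][w] - counts["author_b"][w] for w in vocab}
print(max(diffs, key=diffs.get))  # -> "tax"
```

Real analyses replace step 3 with the classification, scaling, or topic models the course covers, but all of them consume a matrix built exactly this way.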
Text Analysis: Estimating Policy Preferences From Written and Spoken Words∗
, 2015
Abstract
This chapter provides an introduction to the emerging field of quantitative text analysis. Almost every aspect of the policy-making process involves some form of verbal or written communication. This communication is increasingly made available in electronic format, which requires new tools and methods to analyze large amounts of textual data. We begin with a general discussion of the method and its place in public policy analysis, including a brief review of existing applications in political science. We then discuss typical challenges that readers encounter when working with political texts. This includes differences in file formats, the definition of “documents” for analytical purposes, word and feature selection, and the transformation of unstructured data into a document-feature matrix. We will also discuss typical pre-processing steps that are made when working with text. Finally, in the third section of the chapter, we demonstrate the application of text analysis to measure individual legislators' policy preferences from annual budget debates in Ireland.
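The pre-processing steps the chapter mentions can be sketched concretely. The stopword list and the example sentence below are toy assumptions for illustration, not the chapter's own pipeline.

```python
# Typical text pre-processing before building a document-feature matrix:
# lowercasing, punctuation stripping, tokenization, and stopword removal.
import re

STOPWORDS = {"the", "of", "a", "and", "to", "in"}  # toy subset for illustration

def preprocess(text: str) -> list[str]:
    text = text.lower()                   # normalize case
    text = re.sub(r"[^\w\s]", " ", text)  # strip punctuation
    tokens = text.split()                 # whitespace tokenization
    # Drop stopwords and the single-character fragments that naive
    # punctuation stripping leaves behind (e.g. the "s" from "minister's").
    return [t for t in tokens if t not in STOPWORDS and len(t) > 1]

speech = "The Minister's budget, in my view, fails the people of Ireland."
print(preprocess(speech))
# -> ['minister', 'budget', 'my', 'view', 'fails', 'people', 'ireland']
```

Each choice here (stopword list, tokenizer, what counts as a "document") is an analytical decision of the kind the chapter discusses, and different choices yield different document-feature matrices.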
The Quantitative Analysis of Textual Data, Autumn 2014
, 2014
Abstract
The course surveys methods for systematically extracting quantitative information from political text for social scientific purposes, starting with classical content analysis and dictionary-based methods, to classification methods, and state-of-the-art scaling methods and topic models for estimating quantities from text using statistical techniques. The course lays a theoretical foundation for text analysis but mainly takes a very practical and applied approach, so that students learn how to apply these methods in actual research. The common focus across all methods is that they can be reduced to a three-step process: first, identifying texts and units of texts for analysis; second, extracting from the texts quantitatively measured features—such as coded content categories, word counts, word types, dictionary counts, or parts of speech—and converting these into a quantitative matrix; and third, using quantitative or statistical methods to analyse this matrix in order to generate inferences about the texts or their authors. The course systematically surveys these methods in a logical progression, with a practical, hands-on approach where each technique will be applied using appropriate software to real texts. Objectives: The course is also designed to cover many fundamental issues in quantitative text analysis such as inter-coder agreement, reliability, validation, accuracy, and precision.
Votes∗
, 2014
Government in Crisis: Opening the “Black Box” of Intra-Cabinet Competition Over Budgetary Allocation∗
, 2014
Abstract
With the onset of the current economic and financial crisis in Europe, questions about the power of core executives to control fiscal outcomes are more important than ever. Why are some governments more effective in controlling spending while others fall prey to excessive overspending by individual cabinet ministers? We approach this question by opening the “black box” of intra-cabinet decision-making. Using individual cabinet members' contributions to budget debates in Ireland, we estimate their positions on a latent dimension that represents their relative levels of support for or opposition to the cabinet leadership. We find that ministers who are close to the finance minister receive a larger budget share, but under worsening macro-economic conditions closeness to the prime minister is a better predictor of budget allocations. Our results, therefore, show that the effectiveness of delegating fiscal authority crucially depends on the economic environment. Key Words: intra-cabinet bargaining, budgetary politics, fiscal governance, text analysis. ∗Author names are listed in alphabetical order. Authors have contributed equally to all work.
Putting Text in Context: How to Estimate Better Left-Right Positions by Scaling Party Manifesto Data using Item Response Theory∗
, 2014
Abstract
For over three decades, party manifestos have formed the largest source of textual data for estimating party policy positions and emphases, resting on the pillars of two key assumptions: that party policy positions can be measured on known dimensions by counting text units in predefined categories, and that more text in a given category indicates stronger emphasis. Here we revisit the inductive approach to estimating policy positions from party manifesto data, demonstrating that there is no single definition of left-right policy that fits well in all contexts, even though meaningful comparisons can be made by locating parties on a single dimension in each context. To estimate party positions, we apply a Bayesian, multi-level, Poisson-IRT measurement model to category counts from coded party manifestos. By treating the categories as “items” and policy positions as a latent variable, we are able to recover not only left-right estimates but also direct estimates of how each policy category relates to this dimension, without having to decide these relationships in advance based on political theory, exploratory analysis, or guesswork. Finally, the flexibility of our framework permits numerous extensions, designed to incorporate models of manifesto authorship, coding effects, and additional explanatory variables (including time and country effects) to improve estimates.
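The core of the Poisson-IRT idea can be sketched numerically. Everything below is simulated for illustration: the counts, the parameter values, and the grid-search estimator are stand-ins for the real manifesto data and the Bayesian multilevel estimation the paper uses.

```python
# Sketch of a Poisson-IRT measurement model: category counts y[i, j] are
# Poisson with rate exp(alpha[j] + beta[j] * theta[i]), where theta[i] is
# party i's latent left-right position, alpha[j] a category baseline, and
# beta[j] how strongly category j loads on the left-right dimension.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_parties, n_cats = 5, 8
theta = np.linspace(-1, 1, n_parties)  # "true" latent positions (simulated)
alpha = rng.normal(0, 0.5, n_cats)     # category baseline rates
beta = rng.normal(0, 1.0, n_cats)      # category discriminations

rate = np.exp(alpha + np.outer(theta, beta))  # (parties x categories)
y = rng.poisson(rate)                         # simulated category counts

def log_lik(theta_i, y_i):
    """Log-likelihood of one party's category counts at a candidate position."""
    r = np.exp(alpha + theta_i * beta)
    return poisson.logpmf(y_i, r).sum()

# A crude grid search recovers each party's position from its counts alone,
# standing in for the paper's Bayesian estimation of theta, alpha, and beta.
grid = np.linspace(-2, 2, 201)
est = [grid[np.argmax([log_lik(g, y[i]) for g in grid])] for i in range(n_parties)]
print(est)
```

Because the betas are estimated rather than fixed in advance, the model itself reveals how each coding category relates to the left-right dimension, which is the paper's central point.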