Where do interorganizational networks come from?, working paper
, 1997
"... Organizations enter alliances with each other to access critical resources, but they rely on information from the network of prior alliances to determine with whom to cooperate. These new alliances modify the existing network, prompting an endogenous dynamic between organizational action and network ..."
Abstract
Cited by 138 (5 self)
Organizations enter alliances with each other to access critical resources, but they rely on information from the network of prior alliances to determine with whom to cooperate. These new alliances modify the existing network, prompting an endogenous dynamic between organizational action and network structure that drives the emergence of interorganizational networks. Testing these ideas on alliances formed in three industries over nine years, the authors show that the probability of a new alliance between specific organizations increases with their interdependence, but also with their prior mutual alliances, common third parties, and joint centrality in the alliance network. The differentiation of the emerging network structure, however, mitigates the effect of interdependence and enhances the effect of joint centrality on new alliance formation.
Applying quantitative marketing techniques to the Internet
 Interfaces
"... Quantitative models have proved valuable in predicting consumer behavior in the offline world. These same techniques can be adapted to predict online actions. The use of diffusion models provides a firm foundation to implement and forecast viral marketing strategies. Choice models can predict purcha ..."
Abstract
Cited by 26 (3 self)
Quantitative models have proved valuable in predicting consumer behavior in the offline world. These same techniques can be adapted to predict online actions. The use of diffusion models provides a firm foundation to implement and forecast viral marketing strategies. Choice models can predict purchases at online stores and shopbots. Hierarchical Bayesian models provide a framework to implement versioning and price segmentation strategies. Bayesian updating is a natural tool for profiling users with clickstream data. I illustrate these four modeling techniques and discuss their potential for solving Internet marketing problems.
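As a concrete illustration of the diffusion-model approach this abstract mentions, the Bass model is the standard workhorse for forecasting adoption curves such as viral-marketing spread. The sketch below computes its closed-form cumulative-adoption curve; the parameter values (`p`, `q`, `m`) are hypothetical, not taken from the paper.

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adopters at time t under the Bass diffusion model.

    p: coefficient of innovation, q: coefficient of imitation,
    m: market potential. Closed form: m * (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}).
    """
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical parameters for forecasting a viral campaign's reach.
p, q, m = 0.03, 0.38, 100_000
forecast = [bass_cumulative(t, p, q, m) for t in range(0, 11)]
```

Adoption starts at zero, rises along an S-curve, and approaches the market potential `m` from below, which is what makes the model useful for forecasting how far a campaign can still run.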
Competitive price discrimination strategies in a vertical channel using aggregate retail data
 Management Science
, 2003
"... We explore opportunities for targeted pricing for a retailer that only tracks weekly storelevel aggregate sales and marketingmix information. We show that it is possible, using these data, to recover essential features of the underlying distribution of consumer willingness to pay. Knowledge of this ..."
Abstract
Cited by 22 (1 self)
We explore opportunities for targeted pricing for a retailer that tracks only weekly store-level aggregate sales and marketing-mix information. We show that it is possible, using these data, to recover essential features of the underlying distribution of consumer willingness to pay. Knowledge of this distribution may enable the retailer to generate additional profits from targeting by using choice information at the checkout counter. In estimating demand we incorporate a supply-side model of the distribution channel that captures important features of the competitive price-setting behavior of firms. This latter aspect helps us control for the potential endogeneity generated by unmeasured product characteristics in aggregate data. The channel model controls for competitive aspects both between manufacturers and between manufacturers and a retailer. Despite this competition, we find that targeted pricing need not generate the prisoner’s dilemma in our data. This contrasts with the findings of theoretical models, owing to the flexibility of the empirical model of demand. The demand system we estimate captures richer forms of product differentiation, both vertical and horizontal, as well as a more flexible distribution of consumer heterogeneity.
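To make the "distribution of consumer willingness to pay" concrete: in a standard logit demand model with utility u = a - b * price, a consumer's willingness to pay is a / b, so heterogeneous tastes (a, b) induce a WTP distribution. The minimal sketch below simulates such a distribution; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import math, random

random.seed(0)

def simulate_wtp(n=10_000):
    """Draw willingness-to-pay values a / b under hypothetical taste heterogeneity."""
    wtps = []
    for _ in range(n):
        a = random.gauss(4.0, 1.0)             # taste intercept, illustrative
        b = math.exp(random.gauss(0.0, 0.3))   # price sensitivity, kept positive via lognormal
        wtps.append(a / b)
    return wtps

wtps = simulate_wtp()
mean_wtp = sum(wtps) / len(wtps)
```

A retailer that can recover even the moments of this distribution from aggregate data can, in principle, target discounts at the low-WTP tail without discounting to everyone.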
Modeling Multiple Sources of State Dependence in Random Utility Models: A Distributed Lag Approach
 Marketing Science
, 2003
"... We propose a utilitytheoretic brandchoice model that accounts for four different sources of state dependence: 1. effects of lagged choices (structural state dependence), 2. effects of serially correlated error terms in the random utility function (habit persistence type 1), 3. effects of serial co ..."
Abstract
Cited by 15 (2 self)
We propose a utility-theoretic brand-choice model that accounts for four different sources of state dependence: (1) effects of lagged choices (structural state dependence), (2) effects of serially correlated error terms in the random utility function (habit persistence type 1), (3) effects of serial correlations between utility-maximizing alternatives on successive purchase occasions of a household (habit persistence type 2), and (4) effects of lagged marketing variables (carryover effects). Our proposed model also allows habit persistence to be a function of lagged marketing variables, while accommodating the effects of unobserved heterogeneity in household choice parameters. This model is more flexible than existing state-dependence models in marketing and labor econometrics. Using scanner panel data, we find structural state dependence to be the most important source of state dependence. Marketing-mix elasticities are systematically understated if state-dependence effects are incompletely accounted for. The Seetharaman and Chintagunta (1998) model is shown to recover spurious variety-seeking effects while overstating habit-persistence effects. Ignoring habit persistence type 1 leads to an underestimation, while ignoring habit persistence type 2 leads to an overestimation, of structural state-dependence effects. We find lagged promotions to have carryover effects on habit persistence. Ignoring one or more sources of state dependence underestimates the total incremental impact of a sales promotion. We draw implications for manufacturer pricing.
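Source (1) above, structural state dependence, can be sketched in a few lines: the lagged choice enters the current period's utility directly. The binary-logit simulation below is a deliberately minimal illustration of that mechanism only (it omits the paper's other three sources and its distributed-lag structure); all parameter values are hypothetical.

```python
import math, random

random.seed(1)

def simulate_choices(beta_price, gamma, prices, n_periods=200):
    """Simulate brand-1 purchase indicators with a lagged-choice loyalty term.

    gamma > 0 adds a utility bump for brand 1 if it was chosen last period,
    i.e. structural state dependence; gamma = 0 removes it.
    """
    choices, last = [], 0
    for t in range(n_periods):
        u1 = -beta_price * prices[t] + gamma * last   # brand-1 utility
        p1 = 1.0 / (1.0 + math.exp(-u1))              # binary logit probability
        last = 1 if random.random() < p1 else 0
        choices.append(last)
    return choices

prices = [1.0 + 0.5 * random.random() for _ in range(200)]
sticky = simulate_choices(beta_price=1.0, gamma=2.0, prices=prices)
no_dep = simulate_choices(beta_price=1.0, gamma=0.0, prices=prices)
```

Comparing runs with `gamma=2.0` versus `gamma=0.0` shows why the sources must be separated: persistence in `sticky` could otherwise be mistaken for serially correlated errors (habit persistence) rather than a true causal effect of the lagged choice.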
An empirical comparison of logit choice models with discrete versus continuous representations of heterogeneity
 Journal of Marketing Research
, 2002
"... Currently, there is an important debate about the relative merits of models with discrete and continuous representations of consumer heterogeneity. In a recent JMR study, Andrews, Ansari, and Currim (2002; hereafter AAC) compared metric conjoint analysis models with discrete and continuous represent ..."
Abstract
Cited by 12 (0 self)
Currently, there is an important debate about the relative merits of models with discrete and continuous representations of consumer heterogeneity. In a recent JMR study, Andrews, Ansari, and Currim (2002; hereafter AAC) compared metric conjoint analysis models with discrete and continuous representations of heterogeneity and found no differences between the two models with respect to parameter recovery and prediction of ratings for holdout profiles. Models with continuous representations of heterogeneity fit the data better than models with discrete representations of heterogeneity. The goal of the current study is to compare the relative performance of logit choice models with discrete versus continuous representations of heterogeneity in terms of the accuracy of household-level parameters, fit, and forecasting accuracy. To accomplish this goal, the authors conduct an extensive simulation experiment with logit models in a scanner data context, using an experimental design based on AAC and other recent simulation studies. One of the main findings is that models with continuous and discrete representations of heterogeneity recover household-level parameter estimates and predict holdout choices about equally well except when the number of purchases per household is small, in which case the models with continuous representations perform very poorly. As in the AAC study, models with continuous representations of heterogeneity fit the data better.
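The two representations being compared can be stated very simply: a continuous specification draws each household's coefficient from a smooth distribution (typically normal), while a discrete specification restricts it to a few latent-class support points. The sketch below generates household price coefficients under each; the distributions and values are illustrative assumptions, not the study's design.

```python
import random

random.seed(2)

def draw_continuous(n, mean=-2.0, sd=0.8):
    """Continuous heterogeneity: household price coefficients ~ Normal(mean, sd)."""
    return [random.gauss(mean, sd) for _ in range(n)]

def draw_discrete(n, support=(-3.0, -1.0), weights=(0.5, 0.5)):
    """Discrete heterogeneity: two latent classes with fixed support points."""
    return [random.choices(support, weights=weights)[0] for _ in range(n)]

cont = draw_continuous(1000)
disc = draw_discrete(1000)
mean_cont = sum(cont) / len(cont)
mean_disc = sum(disc) / len(disc)
```

Both specifications here imply the same mean price sensitivity, which is precisely why the study's question is interesting: they can fit aggregate patterns similarly while differing sharply in the household-level detail they recover.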
Success in high-technology markets: Is marketing capability critical?
 Marketing Science
, 1999
"... We propose a conceptual framework—with the resourcebased view (RBV) of the firm as its theoretical underpinning—to explain interfirm differences in firms ’ profitability in hightechnology markets in terms of differences in their functional capabilities. Specifically,we suggest that marketing,R&D,an ..."
Abstract
Cited by 12 (0 self)
We propose a conceptual framework, with the resource-based view (RBV) of the firm as its theoretical underpinning, to explain interfirm differences in firms’ profitability in high-technology markets in terms of differences in their functional capabilities. Specifically, we suggest that marketing, R&D, and operations capabilities, along with interactions among these capabilities, are important determinants of relative financial performance within the industry. This paper contributes to the RBV literature by proposing an input-output perspective to conceptualize the notion of capabilities. Specifically, this approach entails modeling a firm’s functional activities (viz., marketing, R&D, and operations) as transformation functions that relate the productive factors/resources to its functional objectives, if the firm were ...
A Note on the Estimation of the Multinomial Logit Model with Random Effects
 The American Statistician
, 2001
"... The multinomial logit model with random effects is often used in modeling correlated nominal polytomous data. Given that there is no standard software of fitting it, we advocate using either a Poisson loglinear model or a Poisson nonlinear model, both with random effects. Their implementations can ..."
Abstract
Cited by 7 (0 self)
The multinomial logit model with random effects is often used in modeling correlated nominal polytomous data. Given that there is no standard software for fitting it, we advocate using either a Poisson log-linear model or a Poisson nonlinear model, both with random effects. Their implementation can be carried out easily in many existing commercial statistical packages, including SAS. A brand-choice data set is used to illustrate the proposed methods. KEY WORDS: Discrete choice model
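The device behind fitting a multinomial logit with Poisson software is the multinomial-Poisson transformation: with a free case-specific intercept profiled out, the Poisson log-likelihood differs from the multinomial log-likelihood only by a constant that does not depend on the category parameters, so both yield the same MLEs. The sketch below verifies this numerically on made-up counts (the data are illustrative, not the paper's).

```python
import math

def multinomial_loglik(counts, eta):
    """Multinomial logit log-likelihood: sum_j n_j * log(e^{eta_j} / sum_k e^{eta_k})."""
    denom = sum(math.exp(e) for e in eta)
    return sum(n * (e - math.log(denom)) for n, e in zip(counts, eta))

def profiled_poisson_loglik(counts, eta):
    """Poisson log-likelihood with the case intercept phi set at its MLE.

    mu_j = exp(phi + eta_j); solving d/dphi = 0 gives
    phi = log(sum_j n_j) - log(sum_j e^{eta_j}).
    """
    total = sum(counts)
    denom = sum(math.exp(e) for e in eta)
    phi = math.log(total) - math.log(denom)
    return sum(n * (phi + e) - math.exp(phi + e) for n, e in zip(counts, eta))

counts = [30, 50, 20]
eta_a = [0.0, 0.5, -0.3]
eta_b = [1.0, -1.0, 0.2]
diff_a = profiled_poisson_loglik(counts, eta_a) - multinomial_loglik(counts, eta_a)
diff_b = profiled_poisson_loglik(counts, eta_b) - multinomial_loglik(counts, eta_b)
# diff_a == diff_b: the gap is constant in eta, so the two models share MLEs.
```

Because the gap is constant (algebraically it is `total * log(total) - total`), any optimizer maximizing the Poisson version lands on the multinomial estimates, which is why a Poisson GLM package suffices.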
Structural Applications of the Discrete Choice Model
, 2001
"... A growing body of empirical literature uses structurallyderived economic models to study the nature of competition and to measure explicitly the economic impact of strategic policies. While several approaches have been proposed, the discrete choice demand system has experienced wide usage. The het ..."
Abstract
Cited by 5 (3 self)
A growing body of empirical literature uses structurally derived economic models to study the nature of competition and to measure explicitly the economic impact of strategic policies. While several approaches have been proposed, the discrete choice demand system has experienced wide usage. The heterogeneous, or “mixed”, logit in particular has been widely applied due to its parsimonious structure and its ability to flexibly capture substitution patterns for a large number of differentiated products. We outline the derivation of the heterogeneous logit demand system. We then present a number of applications of such models to various data sources. Finally, we conclude with a ...
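The mixed logit has no closed-form choice probabilities: each is an integral of standard logit probabilities over the taste distribution, typically approximated by Monte Carlo simulation. The sketch below implements that simulation for a single random price coefficient; product attributes, prices, and the taste distribution are all illustrative assumptions.

```python
import math, random

random.seed(3)

def mixed_logit_probs(attributes, prices, mean_beta=-2.0, sd_beta=0.5, draws=5000):
    """Simulated mixed-logit choice probabilities.

    For each taste draw beta ~ Normal(mean_beta, sd_beta), compute standard
    logit probabilities, then average over draws.
    """
    J = len(prices)
    probs = [0.0] * J
    for _ in range(draws):
        beta = random.gauss(mean_beta, sd_beta)        # one consumer's price taste
        utils = [a + beta * p for a, p in zip(attributes, prices)]
        m = max(utils)
        exps = [math.exp(u - m) for u in utils]        # max-shifted for stability
        denom = sum(exps)
        for j in range(J):
            probs[j] += exps[j] / denom / draws
    return probs

shares = mixed_logit_probs(attributes=[1.0, 0.5, 0.0], prices=[1.2, 1.0, 0.8])
```

Averaging over taste draws is what frees the model from the plain logit's proportional-substitution (IIA) pattern: price-sensitive draws substitute toward cheap products, insensitive draws toward high-attribute ones.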
Editorial: Errors in the Variables, Unobserved Heterogeneity, and Other Ways of Hiding Statistical Error
 Marketing Science
, 2006
"... One research function is proposing new scientific theories; another is testing the falsifiable predictions of those theories. Eventually, sufficient observations reveal valid predictions. For the impatient, behold statistical methods, which attribute inconsistent predictions to either faulty data (e ..."
Abstract
Cited by 4 (3 self)
One research function is proposing new scientific theories; another is testing the falsifiable predictions of those theories. Eventually, sufficient observations reveal valid predictions. For the impatient, behold statistical methods, which attribute inconsistent predictions to either faulty data (e.g., measurement error) or faulty theories. Testing theories, however, differs from estimating unknown parameters in known relationships. When testing theories, it is dangerous enough to cure inconsistencies by adding observed explanatory variables (i.e., beyond the theory), let alone unobserved explanatory variables. Adding ad hoc explanatory variables mimics experimental controls when experiments are impractical. Assuming unobservable variables is different, partly because realizations of unobserved variables are unavailable for validating estimates. When different statistical assumptions about error produce dramatically different conclusions, we should doubt the theory, the data, or both. Theory tests should be insensitive to assumptions about error, particularly adjustments for error from unobserved variables. These adjustments can fallaciously inflate support for wrong theories, partly by implicitly underweighting observations inconsistent with the theory. Inconsistent estimates often convey an important message: the data are inconsistent with the theory! Although adjustments for unobserved variables and ex post information are extraordinarily useful when estimating known relationships, when ...