
Assessing the Societal Impact of Research: The Relational Engagement Approach
Julie L. Ozanne, Brennan Davis, Jeff B. Murray, Sonya Grier, Ahmed Benmecheddal, Hilary Downey, Akon E. Ekpo, Marion Garnier, Joel Hietanen, Marine Le Gall-Ely, Anastasia Seregina, Kevin D. Thomas, and Ekant Veer

Marketing and policy researchers aiming to increase the societal impact of their scholarship should engage directly with relevant stakeholders. For maximum societal effect, this engagement needs to occur both within the research process and throughout the complex process of knowledge transfer. The authors propose that a relational engagement approach to research impact complements and builds on traditional approaches. Traditional approaches to impact employ bibliometric measures and focus on the creation and use of journal articles by scholarly audiences, an important but incomplete part of the academic process. The authors recommend expanding the strategies and measures of impact to include process assessments for specific stakeholders across the entire course of impact, from the creation, awareness, and use of knowledge to societal impact. This relational engagement approach involves the cocreation of research with audiences beyond academia. The authors hope to begin a dialogue on the strategies researchers can use to increase the potential societal benefits of their research.

Keywords: research impact, relational engagement, transformative consumer research, societal benefit, impact assessment

Julie L. Ozanne is Professor of Marketing, Department of Marketing and Management, University of Melbourne. Brennan Davis is Hood Professor of Marketing and Associate Professor of Marketing, Orfalea College of Business, California Polytechnic State University. Jeff B. Murray is Chair of the Department of Marketing and Logistics, Sam M. Walton College of Business, University of Arkansas. Sonya Grier is Professor of Marketing, Kogod School of Business, American University. Ahmed Benmecheddal is a Research Fellow, SKEMA Business School, Université de Lille. Hilary Downey is Lecturer in Management, Queen's University Management School, Queen's University Belfast. Akon E. Ekpo is Assistant Professor of Marketing, Rutgers University. Marion Garnier is Associate Professor in Marketing, SKEMA Business School, Université de Lille. Joel Hietanen is an Assistant Professor, Stockholm University. Marine Le Gall-Ely is Professor, Université de Bretagne Occidentale. Anastasia Seregina is a doctoral candidate, Department of Marketing, Aalto University School of Business, Aalto University. Kevin D. Thomas is Assistant Professor in Advertising and Public Relations, University of Texas at Austin (e-mail: kevin.thomas@utexas.edu). Ekant Veer is Associate Professor of Marketing, Department of Management, Marketing, and Entrepreneurship, University of Canterbury. This article emerged from a dialogue that began at the 2013 Transformative Consumer Research conference. Josh Weiner served as associate editor for this article.

Journal of Public Policy & Marketing (JPPM) has a relatively high journal impact factor in part because it publishes research that grapples with timely and practical policy problems. For example, recent special issues of JPPM have focused on important social topics such as consumption constraints, social entrepreneurship, transformative consumer research, and marketplace diversity and inclusion. In a JPPM editorial, Editor in Chief David Stewart (2013) calls for increasing research that serves public and societal interests. This essay offers a critical reflection on the nature of research impact sought by journals such as JPPM. Much is at stake in defining the strategies and measures of research impact (Smith, Crookes, and Crookes 2013). The international business school accreditation body, the Association to Advance Collegiate Schools of Business (AACSB), recently issued new standards to hold universities accountable; it states that high-quality intellectual contributions should "impact the theory, practice, and teaching of business and management" (AACSB International 2013, p. 16). Emerging progressive tendencies in marketing (e.g., transformative consumer research) promote exploration of new types of research that will result in new measures of impact (Davis and Pechmann 2013; Mick et al. 2012; Ozanne 2011; Özçağlar-Toulouse and Burroughs 2014). If research aimed at benefiting society takes center stage, how will this research be performed and evaluated? The purpose of this article is to reflect on traditional approaches to the meaning and assessment of impact, suggest a broader perspective across the complex process of societal impact, and encourage researchers to explore new forms of productive interaction with end users.

© 2017, American Marketing Association. Journal of Public Policy & Marketing, ISSN: 0743-9156 (print), 1547-7207 (electronic), Vol. 36 (1), Spring 2017, 1–14. DOI: 10.1509/jppm.14.121


We propose a relational engagement approach, arguing that knowledge products created through persistent interactions between academics and other stakeholders are more likely to effect positive social change. Specifically, researchers employing relational engagement approaches work more directly with the external constituency they hope to serve. Thus, they can potentially forge good-quality relationships that involve reciprocal interactions and colearning. Collaborative research potentially delivers a wider range of direct research outputs (e.g., the ability to build stronger social networks through which insights can be shared) than traditional research approaches. We believe more researchers should work with invested stakeholders, sharing the power to define problems and create and use knowledge that can benefit society. In this way, the research process becomes more multivocal; that is, it includes the interests and insights of the end users to balance rigor against relevance. We hope to inspire more debate on how to better fulfill the implicit social contract that academic research should enrich society.

First, we highlight the problem of defining "research impact." Second, we discuss traditional approaches to impact currently used in marketing. Third, we provide a framework for the strategies and measures of societal impact, which we call the relational engagement approach. Finally, we present exemplars in marketing that demonstrate the process of the relational engagement approach.

What Is Research Impact?
Governments, academic institutions, and funding agencies increasingly want some form of accountability for the financial resources invested in academic research (Wiek et al. 2014). In the largest experiment on research impact assessment to date, the United Kingdom uses the Research Excellence Framework (REF) to assess the impact of published academic research and allocate funds to projects that meet their evaluative criteria (REF 2014). Yet assessing research impact is difficult and controversial. Consider the recent use of the commercial software Academic Analytics to measure academic productivity as one component of impact and the debate arising over its accuracy and use (Wexler 2015). Still, funding agencies that invest millions of dollars into research justifiably worry about the translation of efficacious findings into practice. For example, a funding agency of medical research (a field highly scrutinized for its societal impact) might question whether the research contributes to the amelioration of disease (Ioannidis 2004). For a research team to chip away at this laudable goal, they must complete studies with rigor, publish their findings, gain visibility, influence practitioners, integrate their work into common practice, and, finally, demonstrate some measurable improvement (Weiss 2007). It is a formidable task to engage in this full process of research impact.
Given the importance of this process, it is unsurprising that even the meaning of "impact" engenders considerable controversy. Confusion arises in part because "impact" is often used to capture four different points in the process: the creation of the research, the awareness of the findings, the use of the research, and the potential societal benefits of the research (De Jong et al. 2014). The first meaning of impact is the creation of a knowledge product: a journal article, book chapter, conference presentation, model, theory, decision aid, or innovation, to name but a few possible knowledge products. This stage is where academics most wisely invest their expertise. The second meaning of impact is the awareness of the findings, sometimes achieved by media interviews or press releases. The third meaning involves the use of the knowledge product, such as when a consumer, policy maker, or marketing manager adopts the research idea. For example, citation analysis captures the spread of a knowledge product, which is often the use of an article by another academic (Cote, Leong, and Cote 1991). The fourth meaning is when the knowledge product has societal benefits. For example, the definition of research impact by REF (2014) relates to this fourth meaning: "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia." The first three steps are necessary but insufficient for achieving societal benefits.
We first explore the traditional approaches used to assess impact in marketing. Of course, the meaning and assessment of impact varies widely on the basis of the interests of the party evaluating the research (Weiss 2007). University administrators, for example, need measures of impact for assessing educator promotion and tenure and often prefer more quantifiable measures that focus on the direct outcomes of research investments (Smith, Crookes, and Crookes 2013). To avoid using the term "impact," and thus conflating its multiple meanings, throughout the article we instead specify the creation, awareness, use, and societal benefits of research.

Traditional Approaches to Research Impact
The traditional approaches to impact focus primarily on the creation of knowledge outputs (the number of publications in highly ranked journals) and the use of the knowledge products as measured by citations (Malhotra 1996; Sprott and Miyazaki 2002). Many databases exist for counting citations, including Google Scholar, Scopus, and Thomson Reuters Web of Science. Tracing use through citation patterns is consistent with the view of science and knowledge production in which researchers produce decontextualized theoretical knowledge that is politically neutral and flows linearly from academics to the general populace (Murray and Ozanne 2009). Scholars who publish highly cited articles have evidence that their work is being used. Citation metrics, though, quantify an individual researcher's influence primarily within the scholarly community. For example, an individualized "h-index" combines both the citations and the number of publications into a single score (i.e., a scholar with an index of h has published N papers, and h of those N papers have at least h citations each; Hirsch 2005). A high h-index denotes a scholar's high research creation and use within the academy. Institutional incentives reinforce this approach when these metrics are used in promotion and tenure processes. Similarly, the journal impact factor (JIF) serves as a metric for measuring the use of articles published in a journal (Shankar 2009). The JIF captures the average number of citations of articles published in the journal (Alexander, Scherer, and Lecoutre 2007). Articles in journals with a high JIF are, on average, cited more than those in lower-ranked or unranked journals. These bibliometric impact measures are easily available and quantifiable, making them the primary measure of research
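The two bibliometric measures described above are simple enough to compute directly. The following is an illustrative sketch (not part of the original article) of the h-index definition from Hirsch (2005) and the JIF as an average-citations ratio; the function names are ours:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the scholar has
    at least h papers with at least h citations each (Hirsch 2005)."""
    counts = sorted(citations, reverse=True)
    h = 0
    # Walk papers from most to least cited; rank r qualifies while
    # the r-th paper still has at least r citations.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def journal_impact_factor(citations_received, articles_published):
    """JIF as the average number of citations per article published
    in the journal over the assessment window."""
    return citations_received / articles_published


# A scholar with citation counts [10, 8, 5, 4, 3] has h = 4:
# four papers have at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))          # 4
print(journal_impact_factor(200, 50))     # 4.0
```

The worked example makes the point in the text concrete: the h-index collapses an entire citation record into one number, discarding who is citing the work and why, which is exactly the limitation the relational engagement approach addresses.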
