The dynamics and potentials of big data for audience research

Please summarise the main points of the attached article:

The dynamics and potentials of big data for audience research.

Bullet points are fine. Most importantly, please summarise the author's argument.


The dynamics and potentials of big data for audience research
Adrian Athique, University of Queensland, Australia
Media, Culture & Society, 2018, Vol. 40(1), 59–74. © The Author(s) 2017. DOI: 10.1177/0163443717693681. journals.sagepub.com/home/mcs

Abstract
This article considers the future of audience research in an era of big data. It does so by interrogating the dynamics and potentials of the big data paradigm in an era of user-generated content and commercial exploitation. In this context, it is proposed that the major dynamics of big data are a conjoint application of numerology and alchemy in the information age. On this basis, the potentials of new data techniques are addressed in light of the critical gap between audience data and the audiences themselves.

Keywords
alchemy, audience research, big data, cultural studies, data mining, numerology, predictive analysis, social media

'Electronics digested a mix of digital numerology and alchemy, collecting metadata as input to pattern recognition algorithms, breathing life into a machine capable of doing what men and women spent a century trying to do.' (Carvalko, 2016: 121)

Audience research has entered the era of 'big data', a paradigm emerging from two decades of innovative and aggressive information management. In the context of this data arcadia, the need to reconsider our epistemological premise may be less apparent than the sudden expansion of the methodological toolkit, but it is no less pressing. With this in mind, my proposition is that we need to consider both the dynamics and the potentials of such techniques. Media dynamics are determined in this instance as the presumptions, imperatives and motives that shape the paradigm itself, along with the interaction of institutional forces at play in the utilisation of audience data. With the ground set by those dynamics, the media potentials of big data procedures emerge from both the applications and the possible outcomes of these techniques. Since audience research is social research, these outcomes have to be understood outside of the computational process, that is, in terms of likely implications for the operators, clients and subjects of big data. Indeed, it is fair to assume that audience researchers will themselves occupy any one, or perhaps all, of these classes in the course of their work. In establishing the dynamics of big data, I make the assertion here that the two primary motivations of this paradigm stem from two longstanding preoccupations of human science, namely numerology and alchemy. Having, I hope, established this claim, I will seek to establish the potentials that are becoming apparent due to the increasing centrality of audience data in academic research.
User-generated content to self-replicating automata

YouTube (as with other instances of YouMedia) is an exemplar of the digital economy precisely because it is a medium without any content of its own. It relies upon its users to supply the value of the service, and to do so freely and without payment (Andrejevic, 2011). User-led systems have become predominant in an era where media production technologies are cheap and media distribution has been 'liberated' from both expert intermediaries and costs. With the digital future looking highly retrospective and/or mundane at the level of content, the primary logics of the digital media industries have centred upon capturing the commercial value of the World Wide Web in other ways. The answer to the free content conundrum appears to have been found in the unique properties of the Internet as a medium of record, where every action produces its own data point.

At the turn of the millennium, the introduction of user tracking into web browsers and the individual addressability of devices both made it theoretically possible (and perhaps, more critically, permissible) to identify individual users. Subsequently, the 'Web 2.0' project was bankrolled by an Internet advertising boom for two reasons: first, it created a vast body of detailed consumer profiles and click trails; and second, individual users could be more effectively targeted with advertising on the basis of this information. These motivations were reflective of a broader shift in commercial logic, where credit card companies, Internet service providers (ISPs) and online retailers all woke up to the secondary usage potentials of their transaction records. The commercial value of a network system increases exponentially as the user base expands. In the 2000s, Google's growing monopoly on basic search facilities left the company uniquely placed to aggregate profiles of users' viewing habits (Halavais, 2009). This capability allowed Google to aggressively market Internet advertising and rapidly become one of the world's largest companies (Levy, 2011).

At the other end of the scale, user tracking allowed people to be picked out within that vast space, matching universal reach with individual addressability. The 'personal' look and feel of the concomitant digital culture nonetheless rests upon a faux individuation, given that the vast bulk of actual usage remains centred upon mass-produced commodities and universal formats. This unprecedented standardisation of YouMedia is itself fundamental, since the underlying economics of Web 2.0 rest upon the real-time application of targeted marketing via automated algorithms. For this to work, it is essential that user-input variables operate in a recognisable series. That is why the prevailing form of 'identity' in digital culture has been imposed through the clumsy mash-up of the fan survey, dating ad and curriculum vitae (CV) formats. Facebook, like YouTube, effectively categorises human beings on the basis of the films, music and sports teams that they endorse, while also working hard to establish where we were born, where we work and who we know (Cheney-Lippold, 2011). Since this is a global system, everyone in the world must conform to this (let us face it) ridiculous template of human identity as best they can. Despite the obvious shortcomings of commercially defined digital profiling, it remains the case that sufficient scale confers numerical value on even the clumsiest survey instrument.
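[Editor's aside: the sketch below is not part of the article. It is a minimal, hypothetical illustration of the profiling-and-correlation logic the passage describes, in which survey-style endorsements are reduced to items in a series and co-occurrence across profiles is used to match users with further items. All names and data are invented.]

```python
# Illustrative sketch only: toy co-occurrence correlation over hypothetical
# "like"-based user profiles. Not the article's method or data.
from collections import defaultdict
from itertools import combinations

# Hypothetical profiles, reduced to the fan-survey template the article criticises.
profiles = {
    "user_a": {"film_x", "band_y", "team_z"},
    "user_b": {"film_x", "band_y"},
    "user_c": {"band_y", "team_z"},
    "user_d": {"film_x", "team_z"},
}

# Count how often each pair of items is endorsed by the same user.
co_occurrence = defaultdict(int)
for likes in profiles.values():
    for a, b in combinations(sorted(likes), 2):
        co_occurrence[(a, b)] += 1

def correlated_with(item, top_n=3):
    """Rank the items most often co-endorsed with the given item."""
    scores = defaultdict(int)
    for (a, b), count in co_occurrence.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    print(correlated_with("film_x"))  # [('band_y', 2), ('team_z', 2)] with this toy data
```

At the scale the article goes on to describe, the same counting is performed automatically across billions of profiles and every recorded action, rather than four invented users.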
In that respect, the global scale at which contemporary social media platforms operate is inconceivably vast (two billion is an abstraction only tangible, perhaps, to a seasoned quant). Nobody has ever had so much data, nor had it in an individuated and relational structure so purposefully designed for correlation. Unlike an earlier era, where market researchers would examine consumers in direct relation to a given product, the vast datasets of the social media era are purposefully intended to automatically correlate past and potential behaviours in relation to all or any products, activities or actions. This constitutes something of a tipping point in the informational economy. It became possible only because processing power increased exponentially (as anticipated by Moore's law). It became possible only because data storage became inconceivably vast and cheap. It became possible only because a user-led medium of record removed the need to employ millions of data entry clerks to capture the data. With this architecture in place, the engineers of Web 2.0 consciously initialised a chain reaction in the generation of data. The ultimate output of this vast experiment is a mode of informatics where particular logics of correlation can be deployed to create new knowledge from the raw material (see Zafarani et al., 2014). To further draw out the analogy between applied nuclear physics and informational physics, this is the point at which more energy is coming out of the process than is going into it.

The magic system

For Technorati, whose desired crop is readily comparable data, social information must be collected somewhat generically. By firmly anchoring user activities to a set of profiling systems, this vast audience is systematically captured as an informational commodity. The potentials of this commodity are largely determined by what we understand data to be, and this understanding has evolved in correspondence with the evolution of information technology (see Puschmann and Burgess, 2014). In ancient times, data were taken as a priori, as something given (whether that was a place, a thing or a point in time). With the rise of the natural sciences, data were reinterpreted as an indexical record of an established fact. As these facts began to proliferate, and were applied to human subjects by social scientists, these records became the primary resource for governance. Emerging from this legacy, the binary logics of the computer revolution have subsequently redefined data as simply a unit of information and, thus, essentially an integer entered into or derived from an algorithmic process. In this context, the collection, analysis or manipulation of data becomes a mathematical exercise, regardless of what we want to do with our YouTube dataset on popular culture. For the purposes of the computational process alone, it is fundamentally irrelevant whether the information being unitised is about people or brightly coloured rocks. Nonetheless, in a dynamic system that relies upon large-scale inputs from its user base, the human contribution to the generative capacity of numerical data becomes highly significant.

For some years now, a whole series of propositions have been made regarding the potentials of directing these large numbers towards the resolution of mathematical problems (Howe, 2009; Shirky, 2008; Surowiecki, 2004).
Mass participation through a global interactive system apparently realises one of the major aspirations of computer science: the advent of infinitely regenerative data. Billions of users continuously inputting numerical sequences that can then be used to generate an effectively infinite series of calculations promise a mathematical manifestation of Von Neumann's (1966) self-replicating automata. Proposals to harvest the value of such large-scale participation for this purpose often have fantastic motivations, but this is not what I would define as numerology (e.g. Kurzweil, 2005). The numerological dimension of big data arises instead from the analysis of numerical trends in those inputs in order to make inferences about the future. Meteorology and stock market trading are established practices of this kind, largely determined by mathematical predictions derived from a comprehensive, but tightly defined, dataset. With the advent of Web 2
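[Editor's aside: the excerpt ends here. As a purely illustrative sketch that is not part of the article, the 'numerological' mode of inference it describes, reading a trend out of past numbers and projecting it forward, can be reduced to something as simple as a least-squares line fitted to a hypothetical series of counts and extended one step ahead. All figures are invented.]

```python
# Illustrative sketch only: the simplest possible trend-based "prediction",
# a straight-line fit to past counts extrapolated one step into the future.

# Hypothetical daily view counts for a piece of user-generated content.
views = [120, 135, 150, 170, 190, 205, 230]

def linear_trend(series):
    """Ordinary least-squares fit of y = a + b*x over x = 0..n-1."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

intercept, slope = linear_trend(views)
# "Prediction" for the next day: extend the fitted line one step forward.
forecast = intercept + slope * len(views)
print(round(forecast, 1))  # about 244.3 with the invented figures above
```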

Answer To: The dynamics and potentials of big data for audience research

Shivangi answered on May 04 2020
· In the information age, the conjoint application of numerology and alchemy constitutes the major dynamic of 'big data'.
· In this millennium, user-led systems such as YouTube and Google have become predominant because they draw their content and data from their users rather than producing their own. As the user base expands, the commercial value of a network system increases exponentially, and this logic underpinned the Web 2.0 project.
· The user information is...