The Spectacular Commodification Of Language

Daniel Gianfranceschi

According to a recent study, the average American reads about 17 books per year. It should come as no surprise that this number is staggeringly low; not so much by comparison to other countries, but on an ontological level. That’s about one and a half books per month. If we speculate that the books in question are neither Joyce’s “Ulysses” nor any other 800+ page book, the number is rather alarming. Not alarming because everybody should read 800+ page books – which is, let’s be frank, rather impossible if we consider the modern-day workload and general schedule – but because it means that people, at large, are becoming ever more uninterested in stories that stem from the written word. In fact, even when writing is essential for a certain kind of product – a movie’s script, for example – the end result is in another medium entirely, making the content more easily digestible for the general audience. In and of itself this is certainly only a pragmatic choice given the constraints of any chosen medium, but it should be noted that there is an enormous difference in the way we retain information when it is read and when it is watched on the ubiquitous big black screen. Furthermore, it seems that language itself quickly falls into the background of the story, especially in film. This is precisely why well-written movies, like “American Fiction”, “A Real Pain” and certainly many others, stick out so much; language itself, when used properly, is what “makes” the film, and the story follows along into a synergy of sorts. The same, unfortunately, cannot be said for the Marvel franchise.
Even more aggravating is the continued subjugation of language in favor of simple(r) subtitles, in which the gist of the conversation still rings true, but through a completely different set of verbs and, often, through allusions to completely different scenarios than those actually being spoken by the movie’s characters. Naturally, subtitles need to be on the screen long enough for viewers to actually be able to read along, which implies that it is sometimes tedious to keep up with longer monologues and phrases that include “difficult” – we shall return to this notion later on – words. Still, it remains rather perplexing that a movie’s chosen words are then reappropriated and turned into something else entirely, especially given that the subtitled sentences are always and exclusively simplified, and never made more difficult to read along with. For example, the word “tiny” might be replaced by the word “small/smaller”, but never by the word “infinitesimal”. Sure, the three have slightly different meanings, but nobody binge-watching their latest television obsession will question the linguistic merits of any of these words. Moreover, such inquisitive behavior into how we consume language makes it seem as if certain words are inherently “better” than others, meaning that some words seem more easily understandable by a large majority of the population, making them intrinsically better for commercial use. This is certainly a valid observation for those who aim to guillotine language as a whole, but let us consider the inverse for a second: let us consider a world in which words, like all those written on this page or anywhere else in the world, have meaning; a kind of meaning that makes them better suited for certain scenarios and less so for others; a world in which there is no diminutive of language itself because, in the scope of a movie translation, it is given the respect it deserves.
Not because of some elitist war against the seemingly uneducated – which is never the actual reality – but because words, usually and in a professional setting, are used precisely because of their distinctive meaning, making them the complete opposite of interchangeable. The example is surely faulty to some degree, but it would be as if one were to take, just because it was already mentioned, Joyce’s “Ulysses” and transcribe it into “simpler language”, thus missing the point of the work entirely.
In fact, reading – and thus language – is hard work. Reading a book requires a kind of commitment that watching a movie does not (as long as it isn’t a seven-hour Bulgarian epic, for example). This kind of commitment is less due to the fact that a book is sequential and a movie rather simultaneous – at least in a way a book surely is not, given the physical effort it takes to, for example, turn a page – and more due to how language must be understood on a syntactical level for it to become picturesque – which, one could argue, is the quality the best books inherently attain or strive towards – and understandable, whereas a movie functions more in the intertwining of the syntactical and the visual (the latter becoming the scapegoat for the former). Yet, one could argue that there is another key element to the increasing difficulty of and in reading that goes far past the purely grammatical and syntactical. This other aspect is perhaps a byproduct of the constant evolution of neologisms and the apparent assumption that there is no “wrong” word anymore – not in the semantic and pejorative sense, because there obviously are, but in the hierarchical – or rather that, as is apparent in the simplification of language in popular media, words are completely arbitrary and interchangeable at the blink of an eye. This does not refer to words like “gay” previously describing a feeling of joy rather than a sexual preference, no. It refers to the presumed synonymity of words popularized through social media and the internet that encompass the general feeling and meaning of other words that would otherwise be either too “complicated”, too long for a short description of your new holiday portrait or too convoluted to send to your new lover in order to praise their appearance. Thus, the holiday portrait quickly becomes a “selfie”, the awe at another’s beauty a simple “slay” and the complicated “this is spectacular” a perennial “wow”.
It could be argued that such words – and their many companions, like “queen”, “serve”, “wifey material”, “main character”, “NPC” (which surprisingly does not stand for “not pronounced correctly”, but it kind of should), “POV”, “delulu” (which cannot be typed without suffering a minor stroke, of sorts), “ate” (which still leaves us wondering what was eaten and why, without the possibility of ever delivering an answer to said question), “fire” (which is neither a statement nor a verb but an element, a quite dangerous one at that), “cap” (not the one one usually wears on one’s head), “bussin” (I’m not even going to try on this one), “glow up” (which might seem to imply a certain kind of light being triggered, but I suspect there are no lightbulbs in sight), “cooked” (even if dinner was already served), “for the plot” (assuming there to be some grand, overarching narrative when, in reality, it’s mostly about bad hair-dye jobs, questionable dating histories and a rather drastically romantic understanding of spontaneity) and the bajillion others – are a direct result of our incapability to stay interested and, thus, reflective of our ever-declining attention span. “Delulu”, for example, is only one syllable away from its grammatically correct counterpart, “delusional”, but it seems that just this one more syllable is already too much for an algorithm-driven new language. Now, this is starting to sound very much like the rant of an old boomer – yes, this one was on purpose – which is certainly not the end goal, especially because of language’s flexible, fluctuating nature; in fact, this is precisely what makes language so interesting and democratic, as long as one does not live under a dictatorship. Furthermore, the situation we are faced with is, thankfully, nothing like that imagined in “Fahrenheit 451”, for one still has the right to choose how one uses language, how one adapts one’s own vocabulary to the current times and how one wishes to express oneself.
In fact, none of the words mentioned above are technically “wrong”, especially under the flexible parameters of language, but they are indicative of how we, societally, approach language. Even more so, this new kind of global vocabulary, or internet vernacular, is perhaps more indicative of the words we choose not to say than of what we imply with them. To say “they ate in that outfit” is, as long as one is chronically online, generally understood as a replacement for describing how good someone looked in a particular setting, which is all fine and dandy as long as everybody is in on the slang. What one omits to consider under these parameters is the fact that describing precisely how and in what way someone looked good is, sort of, part of the process of a compliment. To say one “ate”, even if generally understood, means a whole lot of very little because it has no diminutive and no augmentative. So, if everybody just “eats” all the time, it could be argued that everybody, apparently, eats the same amount as everybody else, thus nullifying the statement altogether. If, instead, one were to say that “the yellow dress looks good on you” or “the blue shirt matches the pants”, there would surely be more specificity in such a statement. So, if everybody eats, apparently nobody reaaaaaaally eats.
Beyond this, it is rather surprising that for a generation that yearns for individualism – the words mentioned above are, to be fair, mostly used by people, one could generalize, under 35 – the language they speak is homologated to the point of defeating the presumed idiosyncrasy that is actually strived for. In fact, the commodification of language among young adults would lead one to believe that the wish is not for individualism and diversity, but for a homogeneous vernacular that evades details, favors simplicity and yet, paradoxically, remains rather vague about its own end goal. To “do something for the plot” would seem to imply that one views the life one leads as a linear timeline, as something that has narrative, when, in reality, such narrative is almost always crafted day by day. This sort of “third-person view” on one’s own life could lead some to believe that these new words and descriptions are, to talk the talk, actually all “for the plot”; a plot that never arrives but always delivers; one that sees these new, short-form expressions as something exciting, something that breaks the constraints of Webster’s dictionary, Joyce’s “Finnegans Wake” and the semantic horizon, but at what cost? Certainly, Joyce would rejoice at people openly screaming “delulu” at their best friends – which, the more you say it, starts to sound like a birdcall or a reminder of that one ostentatiously horrible Metallica and Lou Reed album – but how must the word “delusional” feel at this poor sight? Neglected and lonely, I presume. Moreover, if everything can, supposedly, be described as a one-to-three-word “thingy”, what inherent value does that attribute to the activity or the feeling, if any at all? Shouldn’t some things be purposefully hard and nuanced to describe? Or, to put it another way: could scenarios, feelings, intuitions and the likes thereof that are objectively more banal than others – a make-up tutorial vs. a juridical process, for example – not be viewed with the same kind of reverence that both activities could attain, when done right? In fact, the critique here is not the content of the actual activity – for “serving” in an outfit can be as laborious as other, perhaps objectively more valuable, actions – but the way in which the content is subsidized by short-form phrases and/or sayings that seem to invalidate the content altogether.
Naturally, such sayings and phrasings are just another guise of the all too familiar metaphor, for, as we have learned, to “eat” does not mean to consume food – in the context we have framed it here – but to slay… ahm, I mean, to look good. Shit, now I fell for it too. Still, we seem to forget that for a metaphor to be effective it ought to deliver on its promise of the word becoming picturesque, which is where many doubts start to arise: when I think of the word “slay”, I intuitively think of the emoji of the freshly manicured, purple-painted nails, which, depending on how one views this statement, is either moronic or deeply funny. But that’s kind of it, right? There is no afterthought, no resting place for the nuances implied but forgotten by the word “slay” and certainly no closure. Everything implied in “slay” is, then, rather left in an eerie haze of half-knowledge and self-evidence, bordering on the laconic. Words and their meaning thus become less descriptive and more situational, in the sense that they seem to, under certain parameters, be better than saying nothing at all but also not enough to actually say anything of value, thus completely hollow and even less than arbitrary. If, instead, these phrasings are the result of – or the protest against – the aimlessness of the twenty-first century, one must ask the question: is this really what rebellion looks and sounds like? Is a world in which the word “kill” has been replaced by “unalive” at all able to be anything more than a caricature of the promise it once held? To merely describe something or someone as a “green flag” or a “red flag” – the former implying a generally positive connotation and the latter a generally negative one – can’t ever really be enough to know the “thing” fully, can it?
Certainly, the sempiternal diminutivization of language is necessary in spaces where the attention span is not intended to be anything worth writing home about. Social media and dating platforms are surely valid digital rooms in which the goal of an exchange does not need to be the accuracy of the implied linguistic tropes, nor should one ever feel guilty for indulging in the occasional “slay” or fire emoji. That being said, it seems that the inverse should hold equally as much value, if not more, especially in a world that is slowly accepting the volatile and transmuted nature of language as a whole. Under this reading, the word “trauma”, for example, has been the subject of a quite strange reevaluation, especially by the younger generation; it has become a verb, finite in itself and tautological of what it implies. Yet, one mustn’t forget that trauma is far more than just a buzzword and, at that, a real physical response. Thus, the word and its meaning forgo ever becoming anything more than a hint of their own meaning, never finalizing themselves. It seems, one could argue, that we are tentatively becoming bored of language itself; bored of its true meaning, bored of what words might imply, bored at any possible response, so much so as to omit it altogether in favor of a vernacular that is as descriptive as a white canvas. With every monosyllabic expression, a word’s true essence seems to die along the way.

Daniel Gianfranceschi

Daniel Gianfranceschi (1999, living and working between Munich and Bardolino) is a multidisciplinary artist working within the realms of painting, writing and sound. Gianfranceschi previously studied fashion management under Prof. Sabine Resch & Prof. Markus Mattes and is now continuing his studies in painting and sound at the Academy of Fine Arts under Prof. Florian Pumhösl & Prof. Florian Hecker. Exhibitions and performances were held at Kunstpavillon München, Künstlerhaus Stuttgart, Württembergischer Kunstverein and Goethe-Institut Athens, among others. Writing contributions have been featured in Erratum Press, Cutt Press, Positionen Magazin, Frameless Magazin, Sleeve Magazin and more.
