What Do We See When We See?

Images, Power, and Perception in the Age of Artificial Intelligence

From Optics to Algorithms

The current fascination and fear surrounding artificial intelligence (AI), particularly generative AI, often portray it as an unprecedented rupture. Yet, a deeper historical and epistemological view reveals that AI is not an alien force but a continuation—indeed, an intensification—of older visual regimes that began with optical devices, matured with photography, and culminate today in synthetic image generation. As early as the Renaissance, painters employed the camera obscura and optical lenses to produce proto-photographic images; what David Hockney (2001) calls the “optical realism” of artists like Vermeer is, in this sense, already a form of technical seeing. Photography, therefore, is not the origin of image automation but its stabilization. AI continues this trajectory by displacing not just the hand and the eye, but increasingly the interpretative function once assumed to be exclusively human. However, to say that AI displaces authorship entirely would be misleading: current generative systems do not act independently—they require human prompts, frameworks, and interpretative scaffolding. Authorship, in this context, is not erased but redistributed. What AI exposes is not the end of intention, but a shift in how intention is mediated. This opens a speculative horizon for posthumanist thought, where some imagine a future in which machines could act, decide, or create autonomously. Yet, for now, AI’s ‘autonomy’ remains a projection—less a reality than a reflection of our own anxieties about control, agency, and obsolescence.

From Fixation to Generation: Epistemological Shifts in the Image

When photography emerged in the 19th century, it disrupted established notions of representation, authorship, and evidence. Walter Benjamin (1936/2008) famously argued that mechanical reproduction stripped the artwork of its aura, replacing ritual value with exhibition value. Yet the photograph retained a documentary function—it was still “of something.” AI-generated imagery, by contrast, divorces the image from referential anchoring altogether. As generative models synthesize images that have no referents in the real world, the notion of the image as evidence collapses. This shift represents not merely a technological evolution, but a deep epistemological rupture: the image ceases to be an index of reality and becomes a statistical hallucination.

What emerges, then, is a new crisis of trust. If the photograph once served as a visual proof, AI images are inherently ambiguous—not due to technical failure, but due to excessive plausibility. We are no longer confronted with blurred truth, but with seamless fiction that mimics the real too well. This moment reveals a deeper continuity: both photography and AI generate constructed, selective images. Neither medium offers a neutral document; both rely on acts of framing, omission, and interpretative coding. The difference lies in the degree of automation and the scale of inference. Where photography masked its constructed nature under the guise of mechanical objectivity, AI unveils and amplifies the artificiality of all image-making, challenging us to rethink our trust in the visual as such.

This crisis forces us to confront a deeper realization: the so-called documentary character of the image has always been an illusion. Both photography and AI-generated imagery are acts of framing, selecting, and excluding. They are constructions shaped by cultural codes, ideological contexts, and institutional demands. The idea that the photograph is an objective witness collapses when we acknowledge its entanglement with power. The difference today is that AI makes this construction more explicit—fiction is not a by-product of image-making but its very condition. Paradoxically, what used to be the domain of critical theory or artistic deconstruction is now enacted by the technology itself. Generative AI destabilizes the truth-value of the image from within, exposing its mechanics of production and calling for a new visual pact—one that accepts mediation, authorship, and perspective as integral to all forms of seeing.

Iconophobia and the Anxiety of Representation

This condition triggers what might be called a contemporary iconophobia—an inherited cultural anxiety toward images. As W. J. T. Mitchell (2005) suggests in What Do Pictures Want?, images are not passive objects but animated social agents that provoke desire, suspicion, even violence. In religious history, iconophobia emerges as the fear that images might usurp reality or divinity itself (Freedberg, 1989). In today’s AI debates, the same fear returns: the dread that images will act without authors, circulate without context, and create meanings independent of human control.

Bruno Latour (2002), in his concept of “iconoclash,” describes the paradox of modern image culture: we destroy images not out of disbelief, but because we fear their power to seduce, deceive, or reconfigure the symbolic order. AI-generated images, particularly deepfakes or synthetic portraits, intensify this tension. They are not simply false—they are uncanny in their believability, and therefore threatening. What is especially significant is that the destabilization of the image’s documentary value, once critiqued from within art and theory, is now being enacted by the tools themselves. Generative AI makes explicit what was once the domain of cultural critique: that the image is always constructed, always mediated, and never a transparent window onto reality. As Susan Sontag (2003) notes in Regarding the Pain of Others, images can simultaneously evoke empathy and numbness. In the case of AI, they evoke a deeper fear: that the image no longer needs the human to come into being.

This reception of AI, especially in the visual field, is often driven by a profound unease tied to the loss of representational control. AI appears to invent images without author, intention, or truth—provoking panic, censorship, and nostalgia for earlier, supposedly more trustworthy, modes of mediation. Yet this response is deeply iconophobic: what is feared is not the novelty of the images, but their capacity to make visible the instability of the image as such. The illusion of photography’s transparency has long concealed its inherent manipulation and selectivity. Today, AI shatters that illusion by rendering the construction of the image explicit.

There is a paradox at play: photography is retrospectively framed as honest and neutral, while AI is condemned as artificial and deceptive. This reveals an unconscious desire to preserve the image as a reliable mirror of the world—precisely at the moment when such faith is no longer tenable. The true disturbance lies not in the images AI generates, but in what they reveal about all images: their capacity to fabricate presence and truth. In this sense, AI does not threaten to replace the image, but to expose its ontological conditions.

From Scientific Vision to Delegated Agency

Marie-José Mondzain (2005) reminds us that images are never neutral carriers of knowledge—they can wound, seduce, and govern. In her pivotal work Image, Icon, Economy, she argues that images, far from being inert, hold political power precisely because they mediate the invisible and the real. Her question, “Can images kill?” is not rhetorical; it exposes the entanglement of visuality and violence, of seeing and acting. In her view, the Christian tradition plays a decisive role in this dynamic: the adoration of Christ as a suffering, crucified body—an image of divine absence and mortal pain—establishes a regime where the sacred is both visible and vulnerable. This produces a paradoxical idol: an image not of glory, but of death. Such a theology of the image shapes Western iconophilia and underpins our conflicted relations with visuality.

This becomes especially relevant in the context of AI-generated imagery, where images are created, circulated, and interpreted without clear intention or accountability. Despite the celebratory discourse around the democratization of image-making through AI, the actual capacity to produce meaningful, impactful images remains unevenly distributed. Generative systems may appear accessible, but their use demands a high level of rhetorical, cultural, and technical literacy. True democratization requires not just access to tools, but access to the symbolic, educational, and critical frameworks that shape how images are made and understood. In this light, AI does not flatten hierarchies—it often reconfigures them in subtler, more opaque ways. When the image escapes the author, its consequences can still be real. Here, Foucault’s (1969/1977) reflections on authorship are instructive: in his essay “What is an Author?” he dissociates authorship from origin and places emphasis on discourse as a system of rules, circulation, and power. Applied to AI, this suggests that authorship in synthetic images must be rethought not in terms of human intent but in terms of discursive positioning—how images are situated, framed, and mobilized within fields of knowledge and control.

In exploring the Byzantine origins of the image, Mondzain further argues that the power of images lies not only in their form, but in their symbolic and social function. For her, the image is inseparable from an economy of the visible: it produces belief, mobilizes affects, and defines what can or cannot be represented. By asking “Can an image kill?”, Mondzain invites us to recognize its potency as a political, not merely aesthetic, operator. It is important to note that she does not claim images kill by themselves; her question is philosophical and provocative, directed at contexts in which images participate in systems of symbolic and institutional violence—such as colonial, racist, or propagandistic imagery. In this sense, the image is not an autonomous agent of violence, but a structuring element within regimes of power that shape affects, justify actions, and exclude subjectivities.

In times of AI, when images are generated, reproduced, and consumed at scale without explicit human authorship, this reflection takes on new urgency. Images produced by neural networks are not neutral: they participate in a regime of visibility that reinforces certain beliefs while erasing others. Thus, AI does not simply produce images—it governs the imaginable. The growing anxiety surrounding the supposed “danger” of AI-generated images is often a new form of iconophobia—which, as Mondzain already noted, stems less from the content of the image and more from its circulation and the belief it generates. The real issue does not lie in the image itself, but in how it is embedded within networks of meaning, affect, and power.

Michel Foucault, in turn, by deconstructing the notion of the author, argues that authorship is not a source of meaning but a function of discourse. In his essay “What is an Author?” (1969/1977), Foucault states that authorship must be analyzed as an institutional effect that organizes knowledge, regulates circulation, and establishes legitimacy. In the case of AI, this authorial function is even more visibly constructed: who “speaks” in the synthetic image? Who legitimizes it? Where does responsibility lie? AI makes the crisis of modern authorship explicit and demands a Foucauldian reading of the image as an utterance, as a dispositif of power. Instead of asking “who made this image?”, we should ask “what systems of knowledge and visibility make this image possible, legible, effective?”

Artificial intelligence did not emerge in a vacuum. It is the culmination of a long trajectory of visual technologies developed for indexing, analyzing, and automating the interpretation of images—a lineage that begins not with art but with science. Since the 19th century, photography has served far more extensively in medical, scientific, criminological, astronomical, and cartographic contexts than in aesthetic ones. These applications did not seek expression but evidence; not subjectivity but data.

Anthropometric photography, X-ray imaging, aerial surveillance, and microscopic photography are all examples of image-making systems designed to reveal, classify, and control. These images were integrated into systems of power and knowledge: they constructed visibility as a form of authority. What AI does is not break from this legacy, but radicalize it. It absorbs this long history of instrumental image use and extends it into new domains. AI not only reads images—it correlates patterns, derives inferences, and generates visual outputs from statistical models.

In this regard, AI is the direct descendant of scientific photography. It inherits its epistemological structure: the image as calculable information, as a proxy for truth. But it also intensifies its reach by automating perception and decision-making at scale. Where once a doctor, scientist, or official would interpret the image, now it is the machine that performs this task. AI systems are trained on vast datasets—many of which are composed of the very photographic archives that defined earlier systems of knowledge and control. In this sense, AI doesn’t simply mimic human cognition; it is informed by centuries of accumulated visual data.

More than aesthetic instruments, both photography and AI operate as epistemic technologies. They are embedded in sociotechnical networks that structure visibility, encode hierarchies, and mediate access to truth. As Michel Foucault might suggest, they function as dispositifs—apparatuses through which knowledge and power circulate. What differentiates AI is the velocity and scale of its inferences, and the opacity of its operations. This makes the critical examination of its visual logic more urgent than ever.

The computational gaze of AI is not a deviation from photographic modernity—it is its algorithmic apex. Therefore, we must understand AI-generated imagery not as a rupture, but as an expansion of a pre-existing visual regime rooted in measurement, surveillance, and abstraction.

Genealogies and Disruptions: Undoing the Modern Myth of the Image

What is proposed here is that the critique of AI must not be built on an idealization of photography. Instead, we must recover the history of images as a history of power, codification, and abstraction of the human gaze. Only by situating AI within this longer visual genealogy can we see that what is at stake today is not the end of truth, but the continuation—by other means—of a visual regime initiated long ago.

This genealogy becomes even more compelling when we consider the argument advanced by David Hockney in Secret Knowledge (2001), later supported by Charles Falco, that optical systems such as lenses and the camera lucida were in use by painters centuries before the formal invention of photography. These technologies of light manipulation were already systems for generating and interpreting images—proto-photographic in function if not in name. From the moment painting began to rely on such devices, it became entangled in the logic of image-making that would later be formalized and chemically stabilized by photography.

When painters such as Caravaggio, Vermeer, or Canaletto employed the camera obscura, mirrors, or converging lenses, they were already operating with systems that manipulated light to create visible, albeit ephemeral, images. In those moments, the painter’s hand became a kind of mechanical appendage following the projection of light—essentially enacting the principle of pre-chemical photography. Photography as “machine-mediated vision” therefore begins long before the fixation of images on a support. This perspective dissolves the boundaries between painting and photography, between art and technique, between eye and lens—repositioning the invention of photography as the chemical stabilization of an already-existing process.

If we extend this logic to AI, an even more provocative continuity emerges. Generative AI represents only the latest phase in a longstanding trajectory: the creation of images through systems that no longer rely directly on the human eye or hand, but on increasingly sophisticated mediational processes. The difference is that, now, what is interpreted is not merely light, but pattern, correlation, style, and semantics. In this way, AI is a direct descendant of the camera obscura—only now the mirrors have become algorithms, and the lenses have transformed into neural networks. The gaze remains mediated, but it is now computed rather than projected.

With this in mind, the timeline of visual technologies should no longer be read as a linear sequence of inventions, but as an evolving ecosystem of visual mediations. The technical image does not begin with the daguerreotype, nor will it end with AI. It continues to evolve toward new forms of automatic interpretation—perhaps biochemical, quantum, or cognitive. In this expanded genealogy, the image emerges not as a static object but as a dynamic site of interaction between technologies, epistemologies, and desires.

If we want to be even more radical, we might say that the history of the image begins with the mirror. Mirror-reflection—what we might call “specular genesis”—marks the first technological instance of image reproduction. Before the mirror, the image was an individual abstraction, without awareness of its multiplicity or copyability. The myth of Narcissus offers a compelling allegory here: the image is not a passive replica but a seduction and a trap. Narcissus does not die from vanity, but from the confusion between body and reflection, presence and representation. This, too, is the crisis we face with AI-generated imagery: a confrontation with doubles that are not real, but feel real enough to trouble our sense of self, authorship, and reality itself.

Seen from this broader perspective, the image ceases to be a mere instrument of representation and becomes a problem in its own right—a site of ambiguity, illusion, and estrangement. Since the mirror, the image has introduced uncertainty rather than clarity, distance rather than immediacy. It is not a transparent window to the world, but a veil that invites projection and misrecognition. Every technological mediation—be it mirror, lens, film, or algorithm—deepens the gap between object and appearance.

This understanding allows us to articulate a non-linear, expanded genealogy of the image:

  1. Mythopoetic image (mirror, water, shadow) – the image as wonder, myth, and duplicated presence.
  2. Optical image (camera obscura, lenses, technical mirrors) – the image as projection and manipulation of light.
  3. Fixed image (photography, cinema, video) – the image as record, archive, and memory.
  4. Computed image (AI, CGI, XR) – the image as calculation, inference, and technical fiction.

This chronology is not sequential but concentric: each new regime of the image reactivates earlier layers. In this sense, the AI-generated image contains within it the shadow of Narcissus, the optics of Vermeer, and the archive of modernity. To understand AI critically is to revisit these thresholds—not to mourn a lost truth, but to sharpen our awareness of the image as a symbolic, ethical, and affective force that must be interpreted, not consumed.

The Image as Sacred, Political, and Affective Force

Beyond its technical and historical dimensions, the image occupies a potent symbolic space across religious, political, and affective domains. In religious traditions, the image often held an ambiguous status—at once a conduit to the sacred and a potential idol. In Byzantine Christianity, the icon was not a mere representation but a window into the divine: it enacted presence rather than depicted it. Conversely, iconoclasts feared this power, destroying images to preserve theological purity and prevent confusion between essence and representation. In Islamic traditions, the prohibition on figural imagery aimed to guard the divine from being reduced to visible form.

These dynamics resurface today with AI-generated imagery. The fear is not only of falsehood but of the image revealing too much—too freely, too impersonally, too vividly. Deepfakes, synthetic idols, and hallucinated likenesses provoke a modern iconoclasm grounded in concerns about authorship, authenticity, and control.

The image has also been a powerful political tool. From royal portraiture and minted effigies to propagandistic cinema, images have long served to naturalize authority. In the 20th century, totalitarian regimes used visual media to engineer public affect and perception. Today, algorithmic platforms curate our visual experiences, shaping not just what we see, but what we can imagine. AI participates in this visual regime by producing images on ideological demand, often trained on datasets riddled with historical and cultural biases. In this way, AI does not merely show the world—it actively decides what counts as visible.

On the affective plane, the image has always been a site of projection and memory. A photograph of a loved one, a selfie, a spectral face in a video call—these all collapse time and space to produce presence. But what happens when such presences are generated by no one? Can one love an image made by an algorithm? Can a synthetic image produce grief, desire, nostalgia? The affective force of the image resists its ontological grounding. It moves us not because it is true, but because we choose to believe in its resonance.

Iconophobia: Between Power and Panic

One of the emerging facets of contemporary iconophobia is the symbolic threat posed by the proto-autonomy of AI systems. Generative AI, particularly in the production of images and texts, appears to invade the final stronghold of human singularity: the imagination. While calculation, strength, and memory have long been delegated to machines, creativity remained, for a long time, the sacred domain of the human subject. Now, in the face of convincing outputs from systems without consciousness or intention, many experience not aesthetic awe, but an ontological crisis: if a machine can produce this, what remains of my desire, my gaze, my labour?

What we are witnessing is a displaced form of iconophobia: the synthetic image is not feared for being false, but for being plausible—for its ability to operate without us. It is the symbolic collapse of authorship and a confrontation with irrelevance. What truly unsettles is not falsification, but efficacy.

An interesting contrast arises when we consider medical imaging. Technologies such as CT scans, MRIs, and X-rays do not provoke iconophobic panic, but technophilic trust. Here, the image is not expression but data. The aim is not to communicate, but to diagnose. Algorithms, in this context, do not compete with human subjectivity—they appear to enhance it. The image does not replace the observer—it supports the specialist. Thus, in clinical domains, the more machine-mediated the image is, the more objective it becomes. In this setting, technological mediation does not threaten authorship—it promises the mitigation of error.

In this light, what provokes anxiety in artistic and cultural fields is not the image itself, but the displacement of symbolic labour from the human to the machine. The fear is not so much about disinformation, but about redundancy. AI systems that produce images capable of moving, persuading, or selling—without the participation of a human subject—challenge the deep-rooted associations between creativity and identity. The AI-generated image does not mirror Narcissus; it evokes Prometheus: it shows us that the fire of creation can be stolen from us—and used without us.

This symbolic power helps explain why images so often provoke fear, suspicion, and violence. Iconophobia—the fear, rejection, or destruction of images—has recurred throughout history in religious, political, and aesthetic forms. As David Freedberg (1989) demonstrates in The Power of Images, images often elicit intense emotional reactions, leading to both veneration and iconoclastic fury. On the political level, Dario Gamboni (1997) explores how the destruction of monuments becomes a form of visual resistance—a gesture to erase or challenge dominant narratives.

Bruno Latour (2002) reframes this dynamic through the concept of iconoclash, a situation in which it is not clear whether an image should be destroyed or preserved. Instead of a binary between iconoclast and iconophile, iconoclash acknowledges the ambivalence of images as agents within the social field. Similarly, W. J. T. Mitchell (2005) argues that images have “desires”—they act, speak, and demand interpretation. In the context of AI, this animacy gains new urgency, as images proliferate without origin, author, or stable referent.

In digital culture, iconophobia manifests as suspicion of manipulation—especially in the form of deepfakes or algorithmic hallucinations. The threat is not only falsification, but the loss of stable referents—the dissolution of the contract between image and world. Susan Sontag (2003) reminds us that even truthful images, particularly of suffering, can desensitize rather than mobilize. Her work invites us to reflect on the ethics of looking and on the fatigue of seeing too much.

Ironically—and as previously noted—in certain domains such as medical imaging, algorithmic interpretation is welcomed. Here, the machine does not provoke iconophobic rejection, but iconophilic trust. The image is entrusted to the system—and diagnosis, to the image. The difference lies in the kind of power at play. In creative fields, AI threatens authorship; in clinical contexts, it promises efficiency. The cultural response is shaped not only by what images show, but by what we fear they might replace.

To further enrich this perspective, José Bragança de Miranda (2020) offers a powerful diagnosis of the current condition of images, describing a process of the “liberation of the image” that unfolds alongside the broader dissemination of prosaic forms in culture. For Bragança de Miranda, images have escaped their classical frameworks—religious, aesthetic, or metaphysical—and now circulate erratically, within a regime of abundance and speed. This shift signals a deeper transformation: the image, like language, has entered a state of prosaic freedom, no longer bound to sublime origins or authorities. He proposes the notion of the “prose of images” to describe this new condition: a situation in which the image, far from being a rare or elevated object, becomes a common, mobile, and politically charged fragment within an economy of visual exchange.

Bragança de Miranda traces this shift across multiple registers: the historical decline of iconic authority, the proliferation of vernacular and digital images, and the central role of television in the desacralization of the image through its logic of circulation and live presence. The contemporary fear of AI-generated images can thus be understood as part of a deeper resistance to this liberation—a desire to restore hierarchies within a landscape of visual excess. The anxiety between “good” and “bad” images is symptomatic of a broader desire to regain control over meaning, authorship, and visual legitimacy. Yet, as Bragança de Miranda suggests, we now live in the era of the image’s prosaic freedom. Like the liberated phrase in literature, the liberated image resists monumentalization and opens space for a new form of common visuality—open, decentralized, and ethically urgent.

As Bragança de Miranda (2020) states, “the image has become prosaic, a piece of communication, a mobile sign without aura or exceptionality.”

In a complementary argument, he contends that we should extend the idea of prose beyond verbal language to include images themselves. This “prose of images” represents a form of resistance to the logic of control—a banal and unruly circulation that escapes codification. To wish to control images, he suggests, is already to generate and fuel the very will to control. Art, therefore, is not guaranteed by “good” or “beautiful” images, nor by genres or classifications, but by the enigmatic act of making-image. This opens the possibility of a common image—an image of the commons—that can only emerge once it has been freed from institutional constraints.

The Image as Ontological Tension

In light of these layered genealogies and symbolic functions, it becomes clear that the image has never been a neutral entity. From mirror to machine, from icon to interface, the image has always mediated between presence and absence, power and perception, memory and manipulation. AI does not inaugurate a new regime of vision—it radicalizes the ongoing, centuries-old processes through which images have been encoded with authority, fear, affection, and abstraction.

What changes today is not the structure of the image, but our position relative to it. The synthetic image confronts us with our own contingency—as viewers, as authors, as meaning-makers. It reveals the instability of the visual field while demanding new forms of critical literacy. In doing so, it offers not only a challenge, but an opportunity: to rethink authorship, to renegotiate truth, and to reinhabit the image as a space of complexity rather than certainty.

This also opens the door to posthumanist reflection, where the boundaries between human and non-human cognition are increasingly porous. Authors such as Donna Haraway, N. Katherine Hayles, and Rosi Braidotti have argued that our notions of agency, identity, and embodiment must evolve alongside our technologies. Haraway’s (1991) cyborg is no longer science fiction but a metaphorical figure for our hybrid entanglements. Hayles (1999) insists that the posthuman is not the end of humanity, but the transformation of subjectivity under the conditions of computation and code. For Braidotti (2013), the posthuman condition invites an affirmative ethics—one that reimagines humanity not as sovereign, but as relational, ecological, and technologically embedded.

In contrast, thinkers like Yuval Noah Harari have brought AI into mainstream cultural debate through accessible historical narratives. Harari warns of a future where algorithms surpass human decision-making capacities, suggesting the emergence of ‘dataism’ as a new dominant ideology. While thought-provoking, this perspective often veers toward technological determinism, presenting AI as an unstoppable force. Such framing risks depoliticizing the discourse by obscuring the human choices embedded in technological development. Harari’s emphasis on the ‘death of free will’—claiming that machines know us better than we know ourselves—can also appear reductive, neglecting the complex social, cultural, and affective dimensions of human agency.

Moreover, his reflections tend to remain outside the deeper technical and philosophical debates on algorithmic culture, autonomy, or visual epistemology. He does not engage with foundational figures in the philosophy of technology such as Bernard Stiegler, Yuk Hui, or Benjamin Bratton, whose works offer nuanced critiques of computational infrastructures and their ontological consequences. Nor does he enter the critical discourse on algorithmic governance and data politics shaped by thinkers like Louise Amoore and Safiya Noble. Similarly, Harari’s writing avoids the complex visual theories developed by authors such as Vilém Flusser, Harun Farocki, W. J. T. Mitchell, and Ariella Azoulay, who have critically interrogated the political, aesthetic, and epistemic dimensions of the technical image.

Against this backdrop, the task is not to fear AI’s symbolic power but to situate it critically. AI is not an autonomous agent—it is a mirror of our own projections, blind spots, and desires. The fear of AI replacing us reflects a deeper uncertainty about what it means to be human in an era when creativity, perception, and memory are all mediated by machines. The challenge is not only to interpret these transformations, but to intervene in their conditions of production with epistemic humility and speculative imagination.

We must resist the urge to respond to this complexity with nostalgia or panic. Instead, what is needed is an ethics of visual engagement: one that acknowledges the image’s power without surrendering to its illusions, and that embraces mediation as a condition of perception rather than a fall from it. The task is not to restore lost certainties, but to develop new literacies—historical, aesthetic, and technical—that can make sense of images in motion, and of selves in relation to them.

The image, in the age of AI, returns us to its oldest paradox: that to see is never merely to witness, but to interpret; and that behind every image lies not just a scene, but a system. What we make of these images—and what they make of us—will depend not on the machines we fear or celebrate, but on the gaze we cultivate: lucid, situated, and radically responsible.

References

Azoulay, A. (2008). The civil contract of photography. Zone Books.

Benjamin, W. (2008). The work of art in the age of its technological reproducibility and other writings on media (M. W. Jennings, B. Doherty, & T. Y. Levin, Eds.). Harvard University Press. (Original work published 1936)

Bragança de Miranda, J. (2020). Para uma prosa das imagens. Comunicação e Sociedade, 37, 11–22. https://doi.org/10.17231/comsoc.37(2020).2867

Braidotti, R. (2013). The posthuman. Polity Press.

Chun, W. H. K. (2011). Programmed visions: Software and memory. MIT Press.

Crary, J. (2013). 24/7: Late capitalism and the ends of sleep. Verso.

Flusser, V. (2000). Towards a philosophy of photography (A. Mathews, Trans.). Reaktion Books. (Original work published 1983)

Foucault, M. (1977). What is an author? In D. F. Bouchard (Ed.), Language, counter-memory, practice: Selected essays and interviews (pp. 113–138). Cornell University Press. (Original work published 1969)

Freedberg, D. (1989). The power of images: Studies in the history and theory of response. University of Chicago Press.

Gamboni, D. (1997). The destruction of art: Iconoclasm and vandalism since the French Revolution. Reaktion Books.

Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Harper.

Haraway, D. J. (1991). Simians, cyborgs, and women: The reinvention of nature. Routledge.

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.

Hockney, D. (2001). Secret knowledge: Rediscovering the lost techniques of the Old Masters. Viking Studio.

Latour, B., & Weibel, P. (Eds.). (2002). Iconoclash: Beyond the image wars in science, religion and art. MIT Press.

Mitchell, W. J. T. (2005). What do pictures want? The lives and loves of images. University of Chicago Press.

Mondzain, M.-J. (2005). Image, icon, economy: The Byzantine origins of the contemporary imaginary (R. Krauss, Trans.). Stanford University Press.

Sekula, A. (1986). The body and the archive. October, 39, 3–64.

Sontag, S. (2003). Regarding the pain of others. Farrar, Straus and Giroux.

Tagg, J. (1988). The burden of representation: Essays on photographies and histories. Palgrave Macmillan.
