AI and the Law

Artificial Intelligence has come a long way in recent years: growing in possibility, skill, and prevalence, it has become part of our everyday lives, influencing our habits.

Driven by private initiative, algorithms now regulate our behavior on social networks, our preferences in movies and games, our ability to apply for a mortgage or loan, our chances of succeeding in a job interview, and so on.

Fascinated and at the same time wary of this new technology, the public and institutions now have to balance the countless benefits of AI with the risks it may pose to our rights.

A need has recently arisen, shared by the European Union and the United States, to regulate the use of Artificial Intelligence on several fronts, lest private actors end up holding more power and responsibility than they can really control.

This article takes an in-depth look at the features of the two most recent regulations – the AI Bill of Rights and the AI Act, both currently non-binding – which are already expected to become the benchmark for Artificial Intelligence regulation worldwide in the coming years.

AI Bill of Rights

The AI Bill of Rights does not yet constitute a legislative proposal, nor does it mention penalties or sanctions for automated systems that fail to comply with its rules. Rather, it is intended as a set of non-binding recommendations to companies and government organizations that intend to take advantage of, or are already making use of, Artificial Intelligence.

The document outlines 5 principles by which to regulate the design and implementation of Artificial Intelligence: the text refers specifically to the U.S. public sector, but the guidelines it contains can also be applied in other contexts where AI is used.

1 - Safe and effective systems

Citizens should be protected from the use of potentially unsafe or ineffective systems that can do harm to the individual and/or the community.

For this reason, automated systems, even before they are implemented, should undergo independent testing to assess their effectiveness and safety.

2 - Protections against algorithmic discrimination

Artificial Intelligence should be designed and used fairly, with proactive measures taken to prevent the risk of algorithmic discrimination.

"Algorithmic discrimination" is defined as unequal treatment by an automated system on the basis of ethnic, social or cultural criteria.

3 - Protection of personal data

Automated systems should by design contain options regarding the protection of personal data and privacy of users and should collect only the data strictly necessary for their operation.

The principle also refers to the need to obtain explicit consent from the user, not unlike what the GDPR requires in the European Union.

4 - Notices and explanations

One should always know if and when one is interfacing with an automated system and what the impacts, if any, of the interaction are.

Transparency in the use of Artificial Intelligence includes the need to make explicit not only the presence of the technology in question, but also how it works, in clear language that is accessible to as many people as possible.

5 - Human alternatives, consideration and fallback

Citizens should always be able to choose a human alternative to an automated system, in ways in which this choice is feasible and appropriate.

The presence of oversight and monitoring figures is particularly recommended in areas considered "most sensitive": criminal justice, human resources, education, and health.

AI Act (European Union)

The AI Bill Of Rights represents an important signal from the United States, the world superpower and cradle of Big Tech, but from the perspective of AI regulation, the European Union has already taken several steps forward.

The AI Act, presented in the spring of 2021 and currently under discussion between the European Parliament and member states, is a piece of legislation that aims to apply the principle of transparency and respect for human rights to the design and use of Artificial Intelligence.

The goal of the AI Act is to regulate the entire sector of the production and deployment of automated systems on the European territory, in accordance with existing legislation in the member states and the General Data Protection Regulation (GDPR).

The AI Act has several points of contact with the more recent AI Bill Of Rights, from which it differs in that it has a more regulatory slant: in fact, it includes a prior registration requirement for this type of technology and a ban on the use of types of Artificial Intelligence deemed "unacceptable risk."

What are high-risk AIs?

The text of the AI Act divides AIs into four classes of risk, calculated proportionally based on potential threats to people's health, safety, or fundamental rights.

Unacceptable risk

Artificial Intelligences that make use of practices such as profiling for coercion or social scoring purposes, or that use subliminal techniques, i.e., techniques that distort people's behavior to cause physical or psychological harm, fall into this category.

Unacceptable risk Artificial Intelligence systems are to be considered prohibited, as they contravene in their operation the values of the European Union and fundamental human rights such as the presumption of innocence.

High risk

Artificial Intelligence systems that have the potential to significantly affect the course of democracy or individual or collective health fall into this category.

Examples of high-risk Artificial Intelligences are:

  • Systems used in education or vocational training for the evaluation of tests or access to institutions;
  • Systems used to make decisions in employment relationships and in access to credit;
  • Systems intended for use in the administration of justice and crime prevention, detection, investigation and prosecution;
  • Systems intended for use in the management of migration, asylum and border control.

The AI Act pays special attention to high-risk applications of Artificial Intelligence. These will be allowed to enter the market, but only if they meet a set of mandatory horizontal requirements that ensure their reliability and have passed several conformity assessment procedures.

Limited risk

This category includes systems such as chatbots or deepfakes, which may originate a risk of manipulation when the nature of the conversational agent is not made clear to the user.

For AI systems considered limited risk, the act imposes a code of conduct on manufacturers based on transparency of the information shared with the public, who must be aware at all times that they are interacting with a machine.

Minimal risk

The vast majority of expert, automated and Artificial Intelligence systems currently in use in Europe fall into this category.

For AI systems considered to be minimal risk, the regulations leave vendors free to adhere to codes of conduct and reliability on a voluntary basis.
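Purely as an illustration (the categories are drawn from the summaries above, while the example use cases are assumptions of ours, not taken from the legal text), the AI Act's four-tier logic can be sketched as a simple lookup that maps a use case to its regulatory consequence:

```python
# Illustrative sketch only: the AI Act's four risk tiers mapped to the
# regulatory consequence each entails. The example use cases are
# assumptions distilled from this article, not from the legal text.
RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "allowed only after conformity assessment",
    "limited": "allowed, with transparency obligations",
    "minimal": "allowed, voluntary codes of conduct",
}

EXAMPLE_USE_CASES = {
    "social scoring": "unacceptable",
    "exam evaluation": "high",
    "credit decisions": "high",
    "chatbot": "limited",
    "spam filter": "minimal",
}

def regulatory_consequence(use_case: str) -> str:
    """Return the tier and consequence for a known example use case."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{tier}: {RISK_TIERS[tier]}"

print(regulatory_consequence("social scoring"))  # unacceptable: prohibited
```

The real regulation is, of course, far more nuanced: the tier depends on context of use, not just the application label.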

AI Liability Directive: toward regulation

At the end of September 2022, just days before the AI Bill Of Rights was published overseas, the European Commission released the AI Liability Directive, a proposal on the legal responsibilities of Artificial Intelligence.

In other words, this document is a first step toward giving individuals or entities that suffer damages related to the use of this type of technology the means to seek legal redress.

In the AI Liability Directive, the European Commission also divides the assumption of legal liability among several actors: first and foremost, it will fall on the companies that make Artificial Intelligence available, but it will also involve other actors in the entire supply chain, not least the users themselves.

Conclusion: is it right to limit innovation?

It is never right to limit innovation, and moreover, blocking the progress of a technology is never the purpose of well-written norms and laws.

Norms live in the culture and history in which they are written, follow their sensibilities, and simply direct technologies toward the most pressing needs of the moment, limiting the danger of harm to society.

In fact, it is not forbidden, to take one example, to research new therapies and medicines through genetic technologies; on the other hand, it is forbidden to clone a human being.

Artificial Intelligence will be no exception: as evidenced by the proposals put on the table in recent years by the European Union and the United States, in the near future this technology will be subject to rules that will lead manufacturers to take the necessary responsibility for the products and services they put on the market.

The intent driving these measures is to preserve individual and collective freedoms. Innovating without putting people's freedoms at risk will only be possible through third-party reality checks, informed by the present, and through contributions from different areas of expertise, from pure science to law, via data science and the humanities.

We at Neosperience are explorers of innovation. What has guided us in the development of our Artificial Intelligence algorithms, from the analysis of user behavior to the simplification of business processes, is the desire to bring people and organizations into a more human and empathetic digital environment.

Receive updates from Neosperience:

Web 3D and its opportunities: interview with Dario Melpignano

Web 3D

On September 20, 2022, Confindustria Brescia hosted an in-depth discussion on the "relationship between Metaverse and Web 3D" for B2B and B2C manufacturing industries.

During his speech, Dario Melpignano, CEO of Neosperience, delved into the logic of Web 3D and its relationship with the Metaverse, offering as examples operational projects such as the one realized for Colombo New Scal SpA, a Lecco-based business active in home appliance manufacturing.

We hear from all sides that the Metaverse is coming: what does this statement mean?

The Metaverse is definitely coming, but at the same time it is still morphing. Nowadays, the Metaverse is akin to the Internet in the mid-1990s, when the Web was still consolidating itself in its protocols and core components.

Today the Metaverse exists in a nutshell, in prototypical, game-like forms such as Decentraland or Facebook Horizon, but no one yet knows what form it will take in the future.

Even so, we can already establish some of the features of the Metaverse. First and foremost, it is part of Web3, that is, the most recent iteration of the Web. Web3 is based on the idea of a pervasive Internet, whereas Web2, which was born when smartphones first hit the market, is limited by the size of device screens.

In the Metaverse, as well as in Web 3D, the three-dimensionality of experiences plays a central role: thanks to Virtual Reality and Augmented Reality, combined with blockchain-based payment ecosystems, a new way of experiencing digital, but also "analog", reality will emerge.

In this regard, what is the difference between Metaverse and Web 3D?

The concept of Metaverse, at least in its current form, describes open social platforms where users can interact and create their own spaces with different devices such as Virtual Reality or Augmented Reality headsets.

3D, as in the ability to represent three-dimensional objects or spaces, is one of the enabling components of the Metaverse, but it has been around for a long time.

In contrast, Web 3D, not to be confused with Web3 mentioned earlier, is a three-dimensional evolution of the Web that poses many more opportunities in business terms.

While the Metaverse still operates on a "gamified" logic that is difficult to interpret from a business perspective, except for some high-level branding operations, Web 3D opens up the possibility for companies to create a proprietary brand space in which to share content that represents a digital equivalent of their products or services.

Specifically, in the context of Web 3D, Augmented Reality plays a crucial role, since it doesn't replace the real world with an alternative, but it enhances it with an additional "layer" of information and content.


And what are instead the shared technologies between Metaverse and Web 3D?

There are many differences between the Metaverse and Web 3D, but there are just as many contact points that unite these new digital frontiers. There are seven categories at play in this field:

  1. Hardware
  2. Networking
  3. Computing power
  4. Virtual platforms
  5. Interoperability standards
  6. Payment systems
  7. CRI: content, resources and identity services.

On one hand, we find a set of devices for immersive enjoyment: experiencing a product or service and learning about a company's offering in a virtual world, with an open vision ready for any form of reality.

On the other hand, there are technologies that take advantage of the scarcity of the digital asset and enable a new kind of commerce, where rights are acquired by navigating a digital environment in Augmented Reality.

However, all these elements can and will have to converge toward an open, free and democratic vision: the strength of the early days of the Internet, before the concentration of power around social platforms.

We hear a lot lately about Web3: now that we have differentiated it from Web 3D, can we explicate this concept as well?

I realize that the names assigned here are not at all helpful: Web3 is the latest evolution of the Web, encompassing and integrating many of the technologies we have just listed.

It should not be confused with Web 3D, which instead focuses on three-dimensionality and the use of immersive devices, as well as on enabling the scarcity of digital assets for a new kind of commerce, and which can also enhance traditional business processes.

Web 3D

Back to Web 3D, can you give us some examples of how to strategically use this technology from an enterprise perspective, in both the Direct-to-Consumer and Business-to-Business worlds?

In the businesses of today and tomorrow, it will be increasingly necessary to establish a direct and ongoing relationship with customers, developing interactive spaces within which they can experience products and services. Kinesthetic learning, as shown by studies on "learning by doing", is a very powerful tool: we remember up to 70% of what we experience, compared with 30% of what we see.

I will mention three examples that are representative of the concepts discussed in this interview, and that further demonstrate how scalable Web 3D is, from large international brands to small businesses, including, of course, Made in Italy SMEs.

Speaking of the latter, a Web 3D project was implemented for Colombo New Scal, a historic manufacturer of home appliances based in the province of Lecco, which allows products to be shown within an Augmented Reality environment.

In this way, prospective customers can not only visualize the chosen object in 3D, but also manipulate and place it within the real environment in a simple and immediate way.

This type of application of Web 3D allows companies to connect with end customers, but from a B2B standpoint it also allows stakeholders to get to know the company and product features without the need for physical interaction, thus overcoming distances and barriers.

Another significant Web 3D experience is the one carried out for Haier, a leading Chinese brand in the home appliance field. Thanks to the Neosperience Reality Plus platform, which combines virtual and augmented reality, a real virtual showroom was created, a "Home of the Future" the customer can navigate and explore.

This experiential evolution of the catalog is also applicable to small businesses: a 3D configurator of frames and lenses was developed for Radius, an optician brand, combining the commercial purpose with fun, verification and validation for the consumer.

Finally, it is worth mentioning the use of Virtual Reality in the medical field: Johns Hopkins University has developed a technique for remote spinal surgery using the rendering technology of a video game engine and 5G connectivity.

Several companies, such as in the fashion industry, have already branched out into the Metaverse. How can a company today best use these technologies, particularly in commerce?

In an increasingly complex historical context, where companies are faced with the need to build and maintain a solid community and be resilient in the face of the many crises, the future will belong to companies that are able to establish a direct and ongoing relationship with customers.

With this in mind, the Metaverse, in the forms it will take in the near future, can be the environment where this community meets, but each company has to develop its own Metaverse, coming into direct contact with its customer base.

The key to overcoming the limitations of a technological world that has so far favored efficiency over effectiveness is empathy.

This translates into a series of best practices: putting the relationship with the customer community at the center, using the most advanced technologies to evolve toward new business models, and understanding the psychological needs of the customer without manipulating them.

Metaverse and Web 3D

And can this happen to B2B companies as well?

Business-to-Business collaborative processes can also benefit from this evolution and become tighter thanks to Web 3D: one can quickly share a prototype with one's buyers, remotely visit an industrial plant, present products interactively, and in some cases train the customer on how to use them.

The key, in this digital transition process in the business world, is to skillfully use Artificial Intelligence and Machine Learning.

Marshall McLuhan already argued this, 60 years ago: "improvements in communication [...] make for increased difficulties of understanding." Too much information is the same as no information: data must be processed and interpreted in order to provide a meaningful benefit.

Before we end this interview, what are your wishes for the future of business in the digital world?

The concentration of power in the digital world has gone overboard, people's online happiness is not where it should be, and algorithms have become too powerful in shaping society's opinions. If we want to effect change toward a better future, for our customers and for our companies, the time to act is now.

We use a technology that we either do not know or learn in a self-taught and limited way. Overcoming the neopositivist approach of Silicon Valley is possible, as studies from MIT and Copenhagen Business School show on a conceptual level.

One way to do this in practice is to adapt technology to its various applications, bringing together – in both academic training and human resources – the STEM disciplines and the cultural capital of the Mediterranean and its Humanities.

As we move from Web2 to Web3, it is time for us as entrepreneurs and beyond to inform and educate ourselves.

Having a clear idea of what is around the corner is essential: education allows us to positively influence the world we live in, digital as well as analog, in which we are interconnected in that one formidable experience we are given to live, our life.


Overview Effect: an empathic, collective and universal experience

Overview Effect

In 1632, in the Dialogue Concerning the Two Chief World Systems, Galileo Galilei offered in support of heliocentrism the belief that "If you could see the earth illuminated when you were in a place as dark as night, it would look to you more splendid than the moon."

In 1968, more than three centuries later, the U.S. Apollo 8 space mission reached lunar orbit. This was not the occasion when astronauts first landed on our satellite – the Moon landing would not occur until the following year – but something surprising happened nonetheless.

At one point in the live telecast astronauts Jim Lovell, Frank Borman and William Anders turned the camera around and framed the Earth, implicitly confirming what Galilei had said. The resulting shot, dubbed Earthrise, is believed to be one of the most significant photographs in human history.

Earthrise: Overview Effect

During the Apollo 8 mission, for the first time three human beings observed our Planet from space, feeling profound emotion at the beauty and fragility of Earth, a blue pearl lost in the infinity of the universe.

Returning from the mission and approaching the atmosphere, the Apollo 8 astronauts' experience grew even more intense. Before their eyes unfolded expanses of cyclones and storm systems, but also cities lit up in the night, coral reefs and northern lights: the same view that, decades later, astronauts on the International Space Station still observe today.

In 1987, author and philosopher Frank White gave a name to the specific feeling of wonder one experiences when observing Earth from space: the Overview Effect.


What is the Overview Effect?

The term "Overview Effect" defines the cognitive change in perception felt by astronauts and cosmonauts during space missions.

Despite never having experienced space exploration firsthand, White collected interviews with 29 astronauts in a volume titled The Overview Effect. The vast majority of them reported a disruptive change in perspective during their space voyages.

Such a radical experience changes the perception of our planet forever. The statements collected by White mostly emphasize the sense of unity and interconnectedness among living beings, the need to appreciate and care for our "home."

Astronaut Jim Lovell, a member of the aforementioned Apollo 8 mission, points out how, from the Moon, Earth looks like "a great oasis compared to the great vastness of space," while cosmonaut Aleksei Leonov perceives the planet as "our home that had to be defended like a sacred relic."

From above, the boundaries and barriers that define life on Earth are invisible. In the holistic view that determines the Overview Effect, our planet reveals all its magnificence and, at the same time, all its fragility.


Pale Blue Dot

Space probes reach distances that for the moment are still unthinkable for astronauts: the images they manage to capture show the Earth as a small, fragile sphere packing all life known to date, "hanging" in sidereal space, wrapped in a thin atmosphere layer that protects it from the external environment.

In 1990, astronomer and author Carl Sagan successfully got the Voyager 1 space probe to turn its camera around and take a photograph of Earth from as far as 6 billion kilometers away before leaving the Solar System.

In the resulting shot, dubbed Pale Blue Dot by Sagan himself, our planet is nothing more than an imperceptible speck, less than a pixel wide, lonely and microscopic in space, caught in a beam of sunlight scattered by the probe's camera.

Pale Blue Dot

In his 1994 text of the same name, Sagan reflects on this very shot, which elicits an indirect, amplified version of the Overview Effect:

"The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar," every "supreme leader," every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam."

The smallness of our planet in the vastness of space simultaneously amazes and frightens observers.

The need to create a planetary society with a united will to protect and give a future to this "pale blue dot" in space, preserving the environment and transcending social boundaries and barriers, becomes evident and imperative.


In space with Virtual Reality

So far, fewer than 600 people in the history of mankind have directly experienced the Overview Effect by observing the Earth from space. Even the most significant shots, in their two-dimensionality, do not come remotely close to the holistic, all-encompassing experience these people were privileged to have.

Advances in technology in the field of Virtual Reality (VR), however, are giving an increasingly broader audience the opportunity to experience the Overview Effect through immersive experiences.

VR Overview Effect

Virtual Reality technology, due to its controllability and ability to provide a sense of presence, is a unique medium for designing and developing publicly accessible experiences.

Virtual Reality experiences inspired by space exploration, especially when combined with sound effects and mindfulness practices, have been shown to elicit a deep emotional response of wonder and awe in the audience. In other words, if you want to experience the Overview Effect, you don't have to be an astronaut nor do you have to wait to become a space tourist.


Overview Effect: a radical change in perspective

The Overview Effect shows that a change of mindset often stems from a radical change of perspective. By shifting from the individual to the universal, from the singular to the collective, rising above the boundaries and barriers of everyday life, we rediscover what makes us human.

Bringing this mindset into business processes is the first step in ensuring the creation of truly empathic products and services.

For years, Neosperience has been guiding companies through digital transformation with an ecosystem of empathic technology solutions that meet and anticipate the needs and wants of the customer base.

We help companies innovate their business strategies with a data-driven approach, giving them the strategic key to build the present and future of their company in the marketplace.


Neosperience at Futura Expo 2022

Neosperience believes in the potential of VR technology, which up until today has only been partially exploited by brands in order to deliver truly immersive experiences.

After successful projects that opened up the potential of Virtual Reality in the e-commerce field, we brought our Overview Effect experience to Futura Expo, the event dedicated to a vision of the future in which Man, Nature, Environment and Economy coexist in harmony.

Futura Expo

From Sunday, October 2 to Tuesday, October 4, visitors to Futura Expo 2022 were able to try a Virtual Reality experience simulating Earth observation from space by means of Oculus Quest 2 headsets at our booth.

With this choice we set out to demonstrate that Virtual Reality is not a "stylistic exercise," nor a fad destined to be soon supplanted, but rather a useful tool to offer new points of view to people and allow them to experience situations that are out of this world.


Generative Art: prospects and limitations of “creative” AI

Cover image: Dall-E Mini, “Generative Art About Empathy In Blue Tones”, digital medium, 2022 

From the bizarre juxtapositions of images created by Dall-E Mini to the NFT market: images generated by AI algorithms are increasingly becoming mainstream. At the same time, this close intersection between art and technology raises several questions.

Can a machine generate works of art autonomously? If so, what is the future of artistic production when it is no longer exclusive to humankind? What are the limits and risks, but also the potential, of this kind of art?


What is Generative Art?

Generative Art is a type of art, mostly visual, based on cooperation between a human being and an autonomous system. An "autonomous system" is, by definition, a piece of software, an algorithm or an AI model capable of performing complex operations without the need for programmer intervention.

Randomness is a fundamental property of Generative Art. Depending on the type of software, the autonomous system can produce different, unique results with each generation command, or return a variable number of results in response to user input.
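As a minimal sketch of this property (an illustrative toy, not code from any actual Generative Art tool), a seeded pseudo-random generator yields a unique yet reproducible "artwork" for every generation command:

```python
import random

def generate_pattern(seed: int, width: int = 16, height: int = 8) -> str:
    """Produce a small ASCII 'artwork' from a seed: each seed yields a
    unique but reproducible pattern, mimicking the role of randomness
    in generative systems."""
    rng = random.Random(seed)  # a private RNG, so runs are repeatable
    palette = " .:*#"          # the 'shades' the system may draw with
    rows = ("".join(rng.choice(palette) for _ in range(width))
            for _ in range(height))
    return "\n".join(rows)

# Same seed -> identical artwork; a new seed -> a new, unique one.
print(generate_pattern(seed=1))
```

Real generative models replace the character palette with learned visual features, but the principle is the same: controlled randomness makes every output distinct.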

The first experiments in Generative Art date back to the 1960s; among the pioneers was Harold Cohen, who with his AARON algorithm was the first to use stand-alone software to generate abstract artworks inspired by Pop Art silkscreens. Cohen's works are now on display at the Tate Gallery in London.

Another attribute of Generative Art, although one that is becoming less and less of a prerogative, is the repetition of patterns or abstract elements provided by the programmer and implemented within the software code.

The development of increasingly complex neural networks that operate on text-image association has led to generative models capable of creating increasingly realistic and accurate images. The best known example of this type of Generative Art is Dall-E.


Dall-E and CLIP: a revolution in image recognition

Dall-E is a multimodal neural network based on OpenAI's GPT-3 deep learning model. This system is capable of generating images from a textual description based on a dataset of text-image pairs.

The first version of Dall-E, which was launched in January 2021 and remained the prerogative of a small number of professionals in the field, was a real revolution for this type of generative model, surpassing the innovations of GPT-3 itself.

Dall-E Mini

Dall-E is indeed capable of generating plausible images from a wide variety of sentences and textual prompts, even those characterized by a composite linguistic structure. OpenAI's model is shown to be capable of understanding and implementing:

  • The perspective structure of the image
  • The inner and outer structure of an object
  • Comparisons and sequences between different images
  • The spatio-temporal location of objects.

The accuracy of the results processed by Dall-E proved to be the perfect area of application for another OpenAI solution: CLIP (Contrastive Language-Image Pre-training), an image classification and ranking neural network trained on the basis of text-image associations, such as captions found on the Internet.

Thanks to CLIP's intervention, which reduces the number of results offered to the user per prompt to 32, Dall-E was found to return satisfactory images in most cases. However, the results are low-resolution and still show obvious limitations in processing certain types of logical associations between elements, such as their spatial location.
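The reranking principle can be sketched in miniature. The snippet below is an illustrative assumption, not OpenAI's code: it scores random placeholder embeddings by cosine similarity, the same measure CLIP applies to real text and image embeddings, and keeps the 32 best candidates:

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rerank(text_emb, image_embs, top_k=32):
    """Return indices of the top_k candidate images, most similar first."""
    scores = [(cosine(text_emb, img), i) for i, img in enumerate(image_embs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:top_k]]

# Placeholder embeddings standing in for CLIP outputs:
# one text vector, 512 candidate image vectors, 8 dimensions each.
rng = random.Random(0)
text = [rng.gauss(0, 1) for _ in range(8)]
images = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(512)]
best = rerank(text, images, top_k=32)
print(len(best))  # 32 candidates survive the reranking
```

Real CLIP embeddings have hundreds of dimensions and come from a jointly trained text and image encoder; only the similarity-based selection step is modeled here.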


Dall-E Mini conquers the Internet

In the art world, imitation is the sincerest form of flattery. OpenAI never published Dall-E's code, but it only took a few months before a less refined version of the neural network appeared, based on the same principles of association and combination of images, drawing on a database of about 30 million elements.

Enter Dall-E Mini, by American developer Boris Dayma, released on the open-source hosting platform Hugging Face. Available to everyone in the form of a simple web app since the spring of 2022, Dall-E Mini has quickly become, according to Wired, the "Internet's favorite meme machine."

The ability to generate 9 low-resolution images from any prompt, even the most bizarre ones, sparked the imagination of users, who had fun creating funny and surreal combinations and sharing them on platforms such as Twitter and Reddit.

Dall-E Mini

In just a few weeks, Dall-E Mini found itself processing about 50,000 images per day and attracted the attention of users normally uninterested in Artificial Intelligence developments, while providing experts with several insights into applying these technologies at a larger scale.


Generative Art: limits and self-impositions

The degree of popularity achieved by Dall-E Mini has immediately raised questions about the possible risks that may creep into Generative Art and its outputs, especially those depicting real people and things.

Images processed by Dall-E Mini have an unmistakable appearance: the outlines of subjects are often poorly defined or distorted, and human faces are almost always deformed to the point that they are no longer recognizable. In most cases, therefore, the artificial nature of the generated images is well understood by the user, so as to minimize the likelihood of generating deepfakes with malicious intent.

Nonetheless, the open-source nature of Dall-E Mini and the vast number of prompts entered by users soon highlighted the need to regulate the results generated by the neural network. Dall-E's database blocks out the most explicit or violent keywords, a system that, although still imperfect, allows developers to control the results returned to the end user.

On the other hand, as is the case with any Artificial Intelligence, within Dall-E and its Mini version lurk the social biases of the humans who developed these technologies.

OpenAI's neural network, for example, reflects the most superficial stereotypes about the food or population of a place with geographic prompts; Dall-E Mini, on the other hand, only returns images of men at the "doctor" prompt and women at the "nurse" prompt.

Generative Art bias

Returning to privacy issues: the possibility that Generative Art could jeopardize the safety of portrayed individuals becomes increasingly worrying given the advancement of neural networks, which are now capable of returning higher-quality, more precisely detailed results than Dall-E.

Dall-E 2, the second generation of OpenAI's neural network unveiled in April 2022, seeks to reduce these kinds of risks by strengthening the system's filtering rules for training data and accepted keywords. The few professionals who have so far gained access to Dall-E 2 have to meet even stricter standards, at least while the capabilities and limitations of the new technology are still being tested.


Dall-E 2: towards a subscription-based model

As anticipated in the previous section, in a little over a year, progress in the area of Generative Art has been substantial, with Dall-E 2 able to generate even more realistic and accurate images at four times the resolution of the first generation.

The improvements in Dall-E 2 mainly focus on the combination of concepts, attributes, and art styles. The neural network can now make various changes to pre-existing images from a natural language description, adding or moving elements within a scene and creating variations from an original subject or artwork.

After an initial period of limited access, OpenAI is ready to release Dall-E 2 in beta to the first million users on the waiting list. Unlike with its first version, however, the organization founded by Elon Musk (among others) and funded by Microsoft is set to adopt a subscription-based model built on credits.

Specifically, each user of the Dall-E 2 beta will receive a predefined number of credits (50 at sign-up and 15 each following month), each of which will equate to an image generated by the neural network. Once they run out of credits, users will be able to purchase a 115-credit bundle for $15.
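A quick back-of-the-envelope calculation on those numbers shows what a single generation effectively costs once the free allowance runs out:

```python
# Paid bundle announced for the Dall-E 2 beta: 115 credits for $15,
# where one credit corresponds to one generation request.
bundle_price_usd = 15
bundle_credits = 115
cost_per_image = bundle_price_usd / bundle_credits  # roughly $0.13 per image

# Free allowance in the first month: 50 credits at sign-up plus
# the first monthly grant of 15.
free_first_month = 50 + 15  # 65 free generations
```

So a paying user gets a generation for about 13 cents, while a new user can produce 65 images in the first month without spending anything.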


Generative Art: current and future applications

From the bizarre creations of Dall-E Mini, ironically shared on the Web, to actual works of art sold at auction for astronomical amounts of money, Generative Art has been reaching an increasingly large audience in recent years.

For the first time, clients will be able to use the generated images for commercial as well as personal purposes. Users on the waiting list, OpenAI explains, already plan to implement the images generated by Dall-E 2 in several types of projects, including some more traditional ones:

  • Children's book illustrations
  • Concept art and storyboards for video games and movies
  • Moodboards for design consultancies

One of the most fruitful commercial outlets for this type of "digital native" art, however, is undoubtedly the NFT market.

The images generated by neural networks, whether combined and reworked by multimedia artists or presented exactly as the algorithm produced them, can be uploaded to a blockchain and sold on marketplaces such as OpenSea, or on platforms for the independent management of one's own non-fungible tokens, such as our NFT Commerce.

On the other hand, the results obtained from neural networks such as Dall-E assume great importance not only for their aesthetic value, but also for their use in a variety of practical applications. It is precisely on image search and recognition that Google has focused its efforts, announcing the development of two AIs that function similarly to Dall-E, Imagen and Parti, neither of which has yet been shared with the public.


Generative Art (?)

The incursion of Artificial Intelligence has opened within art history a chapter that is still largely unwritten.

In the past decades, Pop Art has brought the seriality of industrial processes within the visual arts, while postmodernism has untied the knots of mass society in an ironic game of combination. Even earlier, Dadaism opposed creative intention with the playful randomness of free associations.

From a cultural perspective, Generative Art adds another fundamental variable to this chronology: the autonomy of the tool from the author. This raises questions about some essential points.

Authorship of the artwork

Authorship is an open question in the contemporary art world. This is demonstrated by the recent lawsuit filed against Maurizio Cattelan by Daniel Druet, a sculptor who created some of the artist's most famous installations without ever appearing in the credits or catalogs.

If a work of visual art is generated by an AI, does the authorship belong to the AI, the professionals who developed it, or the digital artist who provided the prompt? Indeed, can a dataset of text-image associations be an adequate counterpart to the faculty of imagination?


Subscription models

The production of Generative Art itself also involves business models that are still being defined. The subscription-based model is currently the most widely used in content creation and distribution, but it is also the one that most limits the independence of the medium and the freedom of creators.

With a pen and a sheet of paper, an artist can freely create whatever they want: that is not the case when, in order to give voice to their creativity, the artist must pay monthly or "by use" to a Generative Art platform, which moreover can be restricted and censored by those who manage it.

Subscription models are complex to manage properly precisely because they involve a continuous exchange of value and freedom between the user and the company. We at Neosperience, having carried out projects in this area with some of the most important companies in Italy and abroad, offer our expertise through both business design work and the development of dedicated digital products.


Unbiased Artificial Intelligence

As we have seen, in order to enhance the potential of Generative Art, we need to make the best use of the specificity of this medium in all its fields of application. More than that, it is essential to design artificial intelligences in an empathetic way. Is it possible to disentangle our biases as human beings from the code that gives life to the Artificial Intelligences we are developing?

Achieving this goal requires a thorough understanding of the hybrid nature of Generative Art, which calls into question both culture and technology. It will therefore be necessary to bring data scientists and humanists together at the design stage, in order to provide AIs with datasets capable of producing results that are unbiased, yet accurate and representative at the same time.
