Smart Agriculture: seizing the opportunities of a growing sector

Technology reveals its most disruptive nature when applied to what has always seemed unchangeable. This is the case with agriculture, whose enormous potential for growth and development emerges when it meets the latest innovations and technological solutions of Smart Agriculture.

In fact, the market grew from 450 million in 2019 to 1.6 billion in 2021, the figure reported in the 2022 research presented by the Smart Agrifood Observatory.

Going into more detail, the largest investments in the sector go to connected machinery (47%), monitoring and control systems for vehicles and equipment (35%), followed by management software (6%) and, finally, remote monitoring systems for land and crops (5%).

First of all, it should be noted that the technological field that seems to have the greatest application opportunities within Smart Agriculture is IoT (Internet of Things), obviously assisted by other technologies such as Drones, Blockchain, Machine Learning, etc.

Italy accounts for about 20% of total EU investment in Smart Agriculture.

But what are the practical advantages for farmers in adopting these instruments?

In this article we will explain the technological applications within the cultivation processes that can help the farmer manage processes, materials, machinery, human resources, and, of course, the crops themselves more efficiently.


Phytopathologies often cause enormous economic losses to the farmer if not recognized in time. To limit the damage, it is essential to detect their onset promptly, while also preventing them by identifying the conditions that increase their risk.

How? For example, through the deployment of IoT sensors in the fields, on the trees, on individual fruits, in the drainage channels, and so on, which make it possible to collect qualitative information on the health of the plants, the humidity of the soil, and the presence of harmful substances, insects, diseases, etc.
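To make the idea concrete, here is a minimal sketch of how such sensor readings could be turned into an early-warning flag. The schema, thresholds, and function names are all hypothetical; a real platform would use calibrated agronomic models, not a simple rule.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One measurement from a field sensor (hypothetical schema)."""
    plot_id: str
    soil_moisture: float   # volumetric %, 0-100
    air_humidity: float    # relative %, 0-100
    temperature_c: float

def phytopathology_risk(readings, humidity_threshold=85.0,
                        temp_range=(15.0, 25.0), min_hits=3):
    """Flag plots where warm, humid conditions persist across readings.

    Many fungal diseases develop when humidity stays high within a mild
    temperature band; the thresholds here are illustrative, not agronomic.
    """
    hits = {}
    for r in readings:
        risky = (r.air_humidity >= humidity_threshold
                 and temp_range[0] <= r.temperature_c <= temp_range[1])
        hits[r.plot_id] = hits.get(r.plot_id, 0) + (1 if risky else 0)
    return {plot for plot, n in hits.items() if n >= min_hits}
```

Requiring several "risky" readings before raising an alert (the `min_hits` parameter) avoids reacting to a single noisy measurement, which matters with cheap field sensors.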

At the same time, by using drones, orthophotos, and 3D images, it is possible to identify the parts of a crop that receive less water, grow more slowly, or appear unhealthy or diseased.

Precision Farming

Another application of IoT and drones, but also of Machine Learning, comes together in the concept of “Precision Farming”.

Usually, resources are applied on a fixed schedule or, in any case, uniformly across a crop, without considering intra-field variability and actual needs.

The new technologies make it easy to collect the information needed to carry out selective treatments, saving time and product. Through predictive analysis, made possible by Machine Learning algorithms, the farmer can predict when certain crops and areas will need to be sown, irrigated, or fertilized.
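The predictive step can be pictured in miniature. The sketch below fits a simple trend on past soil-moisture pairs and uses it to decide whether to irrigate tomorrow; a real system would use richer features (weather forecasts, crop stage, soil type) and a proper ML model, and all names and thresholds here are illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (standard library only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def should_irrigate(history, today_moisture, threshold=30.0):
    """Predict tomorrow's soil moisture and compare it to a threshold.

    `history` is a list of (moisture_today, moisture_tomorrow) pairs taken
    from past sensor logs.
    """
    a, b = fit_line([h[0] for h in history], [h[1] for h in history])
    predicted = a * today_moisture + b
    return predicted < threshold, predicted
```

The point is the shape of the pipeline, not the model: historical sensor data in, a forecast out, and an actionable decision (irrigate or not) attached to it.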

Other Smart Agriculture Applications

But now let's go into the various features, imagining a ready-to-use solution dedicated to Smart Agriculture. What are its practical applications? 

Plot mapping

Thanks to the solution, the operator can easily draw web maps of their plots using orthophotos of the area as a background. The system also allows the upload of cadastral maps. It is possible to map not only the plots but also uncultivated areas or parts of the land destined for other uses.

Georeferencing plants or rows 

In addition to displaying the plots, when the information is relevant at the level of each plant, it is possible to georeference individual plants or rows, using GPS in the field and plotting the information on the map.

Computerization of crop data 

Each plot drawn on the map can be associated with a wealth of information that is useful both for evaluating and monitoring the crop and for submitting data for regulatory compliance.

Interface with soil and plant sensors

The solution interfaces directly with IoT sensors positioned in the plots to provide important information about soil and plant response to weather conditions.

Weather data display

The application provides weather data from fixed control units or IoT sensors installed directly in the plots. The sensors supply useful information about the microclimate of a given area, which is essential for assessing the risk of certain phytopathologies occurring.

Warehouse management for fertilizers and plant protection products

A specific section of the solution would be dedicated to managing the stock of fertilizers and plant protection products. The system automatically displays a counter to monitor the products present in the warehouse.

Processing of graphs and statistics

Smart Agriculture produces numerous graphs and statistics that allow you to carry out a specific analysis for various factors, in particular:

  • Forecast modeling: the solution would be able to produce very accurate forecast models for each area, indicating the probability of generation, fertility, and mortality of a pest agent. This information allows the farmer to detect promptly the onset of a phytopathology or the degree of infestation.
  • Optimize the use of products: the Smart Agriculture solution would allow optimizing the use of fertilizers, plant protection products, and irrigation water thanks to distribution maps and soil parameters obtained from soil sampling.
  • Support in the choice of soil sampling points: the platform would support the farmer in choosing sampling points based on pedological data, orography, and previous samplings.
  • Highlight the presence of localized criticalities: the solution would highlight various criticalities, such as nutrient deficiencies, phytopathologies, or water stagnation in the soil, through vegetation vigor maps. These maps are created from data remotely sensed by satellites or drones.
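One classic ingredient of pest forecast models of the kind described in the first point is degree-day accumulation: many pest species need a roughly fixed thermal sum to complete a life stage. The sketch below is only the textbook formula with made-up numbers; real models use species-specific base temperatures and thermal constants.

```python
def growing_degree_days(daily_temps, base=10.0):
    """Accumulate degree-days above a base temperature.

    `daily_temps` is a list of (t_min, t_max) pairs in Celsius; each day
    contributes the amount by which its mean exceeds the base temperature.
    """
    return sum(max(0.0, (t_min + t_max) / 2 - base)
               for t_min, t_max in daily_temps)

def pest_generation_due(daily_temps, thermal_sum_required=250.0, base=10.0):
    """Flag when enough heat has accumulated for a new pest generation.

    The 250 degree-day threshold is illustrative, not species data.
    """
    return growing_degree_days(daily_temps, base) >= thermal_sum_required
```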


In summary, the Smart Agriculture solution that we have imagined allows you to:

- Improve and modernize crop management by replacing paper maps and data;

- Optimize data handling and loading times;

- Prevent the onset of disease and contain its spread;

- Optimize management choices;

- Save on the use of products (fertilizers, plant protection products, water);

- Contain the environmental impacts of activities;

- Produce higher quality goods;

- Make comparisons over time for managing criticalities in the plots.

The time has come for agriculture to enter the era of digitalization.

The future is 4.0 for agriculture too: by developing new Smart Agriculture solutions, it is possible to serve a growing market that offers opportunities to companies that believe in an ecological and resource-conscious approach.

Receive updates from Neosperience:

Artificial Intelligence and Machine Learning can offer real help against Covid-19. We created a team of experts working with our technologies to develop screening algorithms that support the health system.

With the hashtag #defeatcovid19, we launched the initiative and community defeatcovid19.org to onboard all organizations and experts in Artificial Intelligence. The goal is to identify technological answers that support healthcare departments and doctors in such a difficult time. 

To this purpose, we have already made available our platform and our team of data scientists to organizations and bodies that fight against Coronavirus, joined by the Milan Polytechnic, the first partner of the initiative.

The technologies available include neural networks specialized in identifying specific patterns within images and data correlation models. Patterns can be used to support screening and, subsequently, to make the evaluation of therapies more robust concerning the data collected, thus improving the estimation of the prognosis.

“We are gathering a team of artificial intelligence experts from all over the world,” explained Dario Melpignano, President of Neosperience. “We have made our Neosperience Cloud platform, Core Edition, available for free to all public, private, and non-profit research institutions active in the health ecosystem that request it for developing novel Covid-19 screening methods.”

Giuseppe Andreoni, coordinator of the TeDH laboratory (Technology and Design for Health) of the Milan Polytechnic and scientific coordinator of the Nestore project (funded by the European research program Horizon 2020, which already sees Neosperience engaged alongside 14 European public and private research organizations), is confident of the value of the initiative. “Together with Neosperience, we have created a working group that can develop screening algorithms with which to assist healthcare personnel. The team’s goal is to immediately welcome the contributions of the most expert organizations and data scientists, broadly and inclusively, enhancing the efforts of each towards the common good.”

A challenge that brings together technical skills and ideal motivations, as the President of Neosperience reemphasizes. “In recent weeks, we have dedicated ourselves to understanding how to be more useful to our community in the difficult situation we are experiencing.

One of the primary needs is to have diagnostic tools available that are quick and easy to integrate into the screening processes. Artificial Intelligence and Machine Learning can provide a contribution in early diagnosis to health systems around the world: to organize operations, plan therapies, and improve efficiency in such a critical moment. “

Dr. Alberto Barosi, Head of Non-Invasive Cardiovascular Diagnostics at Luigi Sacco Hospital of Milan and an expert in the field of diagnostic ultrasound, contributed to the realization of the project. The initiative involves a pool of Covid hospitals in the provinces of Milan, Bergamo, and Brescia.

Neosperience, together with the Polytechnic of Milan and the other partners who are joining (at the moment, the 14 partners of the Nestore European Consortium and Value China), will promote the sharing of results, which will remain the common property of the scientific world.

The data and models, together with the developed methodologies, will be made public on open-source platforms such as GitHub and made available to Italian and foreign research groups that request them, thus enhancing the tools that support diagnosis and treatment. Data will be collected anonymously, in compliance with privacy legislation.

Data scientists and organizations that want to learn more and join the project can visit the website:


Together we will defeat Covid-19 thanks to Artificial Intelligence.


Sentient Technology: feelings through sensors


By Sentient Technology we mean the applications of Artificial Intelligence that can read, interpret, and respond to human stimuli.

Man is an emotional animal; for this reason, humans search for emotions within what they create.

In recent years, we have witnessed a wave of technology development that seeks to imitate, or rather decode, human emotions.

A practical example to explain sentient technology is the case study of the Emotional Art Gallery, a Clear Channel Sweden project from 2019. 

The concept consisted of broadcasting works by international artists on 250 digital totems inside the subway stations of the Swedish capital. The artworks were selected for their ability to reduce the stress level of passengers.

For this project, developers created an algorithm that could recognize people's emotional state through the study of online and social analytics. Thanks to this "sentient" capacity of technology, users’ physical and psychological well-being improved.

Another example of Sentient Technology is the Ada project, an intelligent sculpture made up of thousands of tiny LEDs that Microsoft USA, with the collaboration of the architect and designer Jenny Sabin, decided to create inside Microsoft Research Building 99.

For the project, cameras and sensors able to recognize people's emotions (for example by facial expression or tone of voice) were inserted inside the building. Ada can react to these stimuli through the continuous change of colors and patterns on its surface.

Over the years, sentient technology has also been applied to personal care. In a world where loneliness and depression are endemic, it has been proposed as a way to help solve the problem.

The examples are numerous, both in support for young people and in care for the elderly. An interesting case is Lovot, a pet robot for all ages produced by the Japanese company Groove X.

Designed to combat loneliness, Lovot can recognize emotions and interact in real-time with the stimuli it receives from the outside. Its surface is also soft and responsive to the touch.

Another interesting example, especially for its underlying software developed in Italy, is Zeno Robot. Behavior Labs, a Catania start-up engaged in the field of social robotics, had the brilliant idea of using a robot, produced by an American company, to help children with autism communicate and relate to the world around them.

Not all applications of sentient technology are artistic or care-related, like the ones we have just mentioned.

In general, two different uses can be defined: one empathic and one analytical.

This technology was primarily born as the core feature of sentiment analysis platforms, used to extract significant insights about services and products and to recognize and manage possible corporate crises.

Through natural language processing (NLP), enhanced in the most advanced tools by Machine Learning, these platforms can read thousands of posts on social media and the web in real time, recognizing the topic of conversation and, above all, the sentiment of the writer.
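The input/output shape of such a platform can be illustrated with a deliberately naive lexicon-based scorer. Real tools use NLP models trained on labelled data; this keyword count is only a sketch, and it fails on exactly the hard cases (sarcasm, context, typos). The word lists are made up.

```python
POSITIVE = {"love", "great", "excellent", "happy", "amazing"}
NEGATIVE = {"hate", "awful", "terrible", "angry", "broken"}

def sentiment(post):
    """Toy lexicon-based sentiment scorer.

    Counts positive and negative keywords and returns the dominant label.
    """
    words = [w.strip(".,!?;:").lower() for w in post.split()]
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Note how brittle the approach is: "not great" would still count as positive, which is precisely why production systems moved from lexicons to learned models.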

However, a simple grammatical error, a statement taken out of context, or a hint of sarcasm is enough to weaken the reliability of the analysis. The technology is still limited, and limiting, given the complexity of human language and of interpreting emotions.

Humanity suffers by nature from emotional illiteracy, especially in this digital and virtual age. We are unable to name the emotions we feel and to recognize the feelings of those around us; so how can we hope to teach an algorithm to be empathetic?

Sentient technology, if not used responsibly, risks turning into cynicism.

The following example can be interpreted in this way.

Not long ago, a Korean broadcaster streamed a show called Meeting You, telling the dramatic story of a mother who had lost her seven-year-old daughter.

During the transmission, the authors decided to recreate the 3D model of the daughter in a virtual environment.

The virtual child was built with the real look, voice, movements, and feelings of the deceased girl. In the end, the mother was invited to play with her in this fictional world, to say goodbye one last time.

A problem emerges: a sentient technology that presents itself as empathetic raises numerous ethical questions.

How far can we go? We will find out over time.

Photo by Tyler Lastovich on Unsplash

Receive updates from Neosperience:

AI and Photography: beauty lies in the programmers’ eyes

Can AI and Photography work together?

A few weeks ago, in our article “Music and Artificial Intelligence”, we talked about how technology is modifying the music industry.

We stressed that this phenomenon is not dangerous, provided that Artificial Intelligence is applied responsibly by all the actors involved.

The same reasoning also applies, with the necessary differences, to the relationship between AI and Photography.

There are two cases, opposite in terms of aesthetic and ethical quality, to be taken as an example of the role played by technology in the photography field: the "Dreams of New York" project and the development, in the Machine Learning world, of the GANs technique.

The first one is an artistic project created by Tanner Woodbury and Nikolos Killian, two American designers. While wandering in the streets of New York at the slow pace of Google Street View, they noticed the beauty of some sights of the city.

And that’s why they decided to carry out a photographic project, capturing those “amateur” shots and turning them into black and white. They held an exhibition, with an artbook that quickly sold out.

The technological tool suddenly became an involuntary art photographer. American copyright legislation was crucial to the success of the project because, if a machine takes the picture, the image has no human author and the intellectual property belongs to everyone.

On the other hand, GANs’ case was a whole other story.

The acronym stands for Generative Adversarial Networks and indicates a Machine Learning technology invented only in 2014.

Its principle is quite simple: there are two neural networks, a generator and a discriminative opponent. The first has the task of taking the training data and generating new samples from it.

The latter analyzes the results produced by its twin to check whether they respect the truthfulness parameters set by the programmer.

Let's take a practical example: a GAN has to analyze a database of thousands of people's faces. The generative network has the task of creating the image of a completely new face, while the opposing network has to determine whether the image created by its companion is real or not. Each image is a battle between the two networks; one wins and the other loses. The system learns from the outcome of each round.
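A real GAN pits two neural networks against each other through gradient descent, which does not fit in a few lines. The toy loop below keeps only the adversarial feedback idea in caricature: a one-parameter "generator" trying to produce numbers, and a threshold "discriminator" that rejects anything far from the real distribution. All numbers are illustrative; this is not an actual GAN.

```python
import random

random.seed(0)

# "Real" data: numbers drawn around a true mean the generator must discover.
REAL_MEAN, REAL_STD = 10.0, 1.0
real_data = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(500)]
est_mean = sum(real_data) / len(real_data)  # what the discriminator "knows"

def discriminator(x):
    """Accept a sample as 'real' if it falls near the real distribution."""
    return abs(x - est_mean) < 2 * REAL_STD

# Generator: a single parameter mu, nudged whenever its output is rejected.
mu, step = 0.0, 0.5
for _ in range(200):
    sample = random.gauss(mu, REAL_STD)
    if not discriminator(sample):           # caught as fake: adjust
        mu += step if sample < est_mean else -step
# After the loop, mu has drifted toward the real mean: the "generator"
# has learned to produce samples the "discriminator" accepts.
```

The structure mirrors the description above: each iteration is one "battle", and the losing side (here, only the generator) updates itself based on the outcome.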

After a few years of refinement, today there are GANs able to “imagine” and create faces so credible as to be indistinguishable from real ones, both to the opposing network and to the human eye.

As with Deep Fakes, the risk is that these tools fall into the wrong hands, perhaps damaging the community by creating non-existent people. Moreover, copyright allows everyone to use the images produced by GANs for their own purposes, precisely because they are created by machines and Artificial Intelligence, not by people.

A more philosophical question persists.

Photography is perhaps the most apt instrument for telling the story of humanity and reality. If it is used to generate something that does not exist, a contradiction is created.

Broadly speaking, it is the same critical issue that emerged with the spread of Photoshop, but made more acute by the central role of the machine in the falsification process.

In this case, as in similar situations, the problem lies not in the technology but in those who hide behind it. In fact, GANs were originally conceived by their creator, Ian Goodfellow, to make large amounts of data available to small research teams and specialized centers, making AI training more sustainable.

For example, GANs can create, starting from a limited database of images, new original elements with which an artificial intelligence can be trained, eliminating the cost of retrieving photographs.

Ergo: a tool for the democratization of technology and creativity.

GANs were then used in extremely creative ways, conceptually surpassing their original purpose: for example, in unique artistic projects such as the “artificial” creation of a painting that was sold at auction for 432 thousand dollars.

At the same time, artists such as the English Anna Ridler have used GANs in their works and performances; worth mentioning is the short film Fall of the House of Usher, in which the imagery becomes a plastic art conceived and composed by the machine.

If we wanted to classify the different uses of GANs, we should distinguish between two intents: the creative and the “astute”. The mentality and objectives of those behind the computer, rather than behind the camera, determine the truthfulness and ethics of the results. Photography is a science, and today, in the era of numbers, this is more evident than ever.

What will be the future of the relationship between AI and Photography?

Nobody knows, but we are ready to find out.


What is Psychographics? An overview and the User Insight practical case

What is Psychographics?

It is the study of the individual based on their interests, personality, and habits. It is the natural evolution of profiling based on socio-demographic, geographical, and behavioral data.

Psychographics is not a recent field of study: as a branch of psychology, it was developed and applied to marketing and traditional research (focus groups, market research, etc.).

However, it was through digital technology that it developed its full potential.

By analyzing user behavior on social media, e-commerce sites, and any other “virtual” environment, Psychographics can now profile users in a way that was unthinkable just a few years ago.

Its goal is to understand individual characteristics such as emotions, values​​, and attitudes, as well as a whole other set of psychological factors.

All these data provide precious insights into the motivations behind people's behavior: for example, why they buy a specific product, support a certain cause, or vote for a particular political candidate.

We have all heard about the infamous Cambridge Analytica scandal. The researchers and marketers involved were able to boost numerous political campaigns thanks to psychographic data illegally retrieved from people's social profiles.

The method they used was to divide subjects into five macro-clusters, based on how strongly they presented each of the following psychological traits:

  • Openness: this trait indicates how open-minded a person is. A person with a high level of openness is curious, creative, and open to change;
  • Conscientiousness: a person who shows a high level of conscientiousness is responsible, sets long-term goals, and does not act impulsively;
  • Extroversion: subjects characterized by this trait love having fun with people and living in social environments. They are enthusiastic, but often let themselves be guided by others, and they love being the center of attention;
  • Agreeableness: a person with high levels of agreeableness is usually friendly, kind, and diplomatic. They also show optimism and tend to trust others;
  • Emotional stability (or its negative counterpart, Neuroticism): a person with a high level of emotional stability tends to easily experience positive emotions;

This model is known as OCEAN (from the initials of the five traits), or the Big Five.

How does this model apply to marketing?

Through Psychographics, it is possible to understand the fundamental individual characteristics of your customers, in order to collect useful guidelines on how to communicate and create one-to-one messages. 

Let's take an example. A company in the energy market needs to communicate a promotional offer to its audience, but first it decides to cluster that audience with the OCEAN psychographic model.

Practical examples of psychographic profiling.

If a customer shows a strong affinity with the Openness cluster, they will receive a graphically creative banner offering the possibility to customize the energy contract according to their needs.

If, instead, the customer belongs to the Extroversion cluster, they will be told that the offer has been appreciated by many people, with the possibility of receiving a discount if they bring a friend.

If the person belongs to the Conscientiousness cluster, they will be given the opportunity, directly on the banner, to explore the offer in depth and discover its long-term advantages.
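The matching logic just described is, at its core, a lookup from a dominant trait to a creative. The sketch below assumes the per-trait scores are already computed (a real psychographic tool derives them from behavioral data); the trait names follow the OCEAN model, while the banner copy and function names are invented for illustration.

```python
TRAITS = ("openness", "conscientiousness", "extroversion",
          "agreeableness", "emotional_stability")

# Sample creatives keyed by trait; clusters without dedicated copy
# fall back to a generic banner.
BANNERS = {
    "openness": "Design your own plan: a fully customizable energy contract.",
    "extroversion": "Loved by thousands. Bring a friend and get a discount!",
    "conscientiousness": "See the long-term savings of our offer, step by step.",
}
DEFAULT_BANNER = "Discover our new energy offer."

def dominant_trait(scores):
    """Return the OCEAN trait with the highest score for a customer."""
    if set(scores) != set(TRAITS):
        raise ValueError("expected one score per OCEAN trait")
    return max(scores, key=scores.get)

def banner_for(scores):
    """Pick the creative matching the customer's dominant trait."""
    return BANNERS.get(dominant_trait(scores), DEFAULT_BANNER)
```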

The possible customizations are endless; it will be up to the psychographics expert, together with the creative team, to find the best practical solutions to reach most of the target audience with the correct message.

Given the power that this method puts in companies' hands, the market has been subjected to strict regulations. What Cambridge Analytica did just a few years ago would be impossible to accomplish today. In recent years, alternative tools have been developed, fully compliant with GDPR, which allow companies to acquire the same type of information and to use it, this time, for the benefit of people.

This is why Neosperience has created User Insight.

User Insight is a tool that uses the latest Artificial Intelligence, Machine Learning and Advanced Analytics technologies to allow companies to learn about the psychographic traits of customers, thanks to the analysis of their browsing behaviors.

Watch the video and find out how User Insight can help you increase conversions by 20% to 50%.

In a market where personalization of the offer has become the key to the success of commercial proposals, understanding the needs and desires of each customer, in full respect of their privacy, becomes an essential factor.

The future belongs to those who will be able to use new technologies to constantly improve customer experience, progressively reducing the "gaps" between physical and digital worlds. At Neosperience, we believe that this can be possible, and we work to give substance to a technology that allows companies to be more and more empathic and closer to their customers.

Photo by Markus Spiske on Unsplash



Music and Artificial Intelligence. Please don’t shoot the piano player


Artificial Intelligence is becoming increasingly widespread, even in unexpected areas. Until a few years ago it was thought that its use would be limited to industrial production, repetitive tasks and, in general, jobs that do not enrich the human spirit. Today this assumption is no longer valid. Now AI is also an artist.

Painting, sculpture, poetry, photography, cinema; there is no artistic field in which Artificial Intelligence has not been applied at least once, often with surprising results.

The musical field is among the most involved in this revolution of creativity, probably because music, after all, is an art that lives on mathematics and physics, and is therefore predisposed to the influence of algorithms, code, and data.

The latest albums, soundtracks, and songs by artists or companies that have used tools based on Artificial Intelligence have alarmed the music market. According to experts, the sector now risks a profound revolution (if not destruction). But is that so? In other words, is it right or not to shoot the... “artificial” piano player?


How does the application of artificial intelligence to music work?

Simply put, to learn, an AI is fed thousands and thousands of songs through neural networks (mathematical models that imitate biological neural networks) trained via machine learning, and in particular deep learning (a sub-category of ML able to infer meaning without human supervision). These pieces are fragmented and studied; the machine extracts the basic information and recognizes patterns it can use to create original works, similar to those any artist could compose.
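The pattern-extraction step can be shown in drastically simplified form with a first-order Markov chain over notes: learn which note tends to follow which, then walk those transitions to produce a "new" melody. Real systems use deep networks over audio or scores, but the core idea of learning sequential patterns from a corpus is the same; the note names here are toy data.

```python
import random

def learn_transitions(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def compose(table, start, length, seed=42):
    """Generate a new melody by randomly walking the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        # Fall back to the start note if a note was never seen mid-melody.
        melody.append(rng.choice(table.get(melody[-1], [start])))
    return melody
```

The generated melody is original in the sense that it was never in the corpus, yet every local transition it contains was learned from the corpus, which is the essence of the process described above.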


Everything depends on the use made of it, and how it sounds…

While the learning process is similar for any machine-learning-based system, there are two very different applications of AI for music: Flow Machines by Sony and Magenta by Google, for example, sit at the two extremes.

The first is not a creative Artificial Intelligence, or at least not in the usual sense of the term; it merely facilitates the artist's work, freeing their creativity and stimulating it with suggestions and ideas based on their preferences and attitudes.

Magenta, on the other hand, is a true artificial composer that, depending on the inputs provided to it, independently manages to create an original track. The quality of the composition is still not pleasing from many points of view, but technological innovation is growing exponentially and so are its results.

These are not the only tools available at the moment; among others, we can mention AIVA, OpenAI's MuseNet, Amper, and Jukedeck. Each is specialized in certain features and functionalities. What they have in common is that they have attracted the attention of media and investors.

If we also consider the recommendation algorithms of streaming platforms like Spotify or Apple Music, or all the applications of AI in the field of editing tools, it is clear that the penetration of this technology in the musical field is more advanced than we might believe.


But what are the possible consequences of a macro-spread?

At least in the short term, there should be no substantial change in the way we listen to or choose our music.

Some "artificial" songs and albums, like "I AM AI" by Taryn Southern, sung by the performer but composed, played and produced by the open-source software Amper, will continue to come out and will surely get a good commercial success, but they will be exceptions, and probably they will be appreciated for their innovativeness and not for their intrinsic quality.

Over time, however, things will change. A sign of this evolution is the acquisition of Jukedeck, which we mentioned earlier as one of the best intelligent music composition tools, by TikTok, one of the most successful social networks of recent years and one especially loved by the new generations.

Imagine what could come of this marriage. Perhaps, once registered on the social network, we will be able to create our own song, helped by an evolved AI, and sing it and share it with friends.

This way, it would be possible to break down a barrier that is impassable for most people: learning a musical instrument.


Every subscriber could become a singer, a musician, and maybe a music influencer.

This story is the fruit of our imagination, no matter how beautiful or frightening it may be. Things are undoubtedly changing, and music is facing many transformations stimulated by technological innovation (augmented reality concerts, artists who are no longer alive returning to sing in the form of holograms, bitcoins to buy songs and albums directly from singers...and so on).

Ultimately, to answer the question that we asked ourselves in the beginning: is it right or not to shoot the "artificial" piano player? 

Well, there is one thing that is always true: blocking innovation is counterproductive. The goal is to be able to guide it on the right path, to allow a gentle transformation for artists and experts and not damage anyone.

Artificial intelligence is born as a tool to enable or facilitate human activities. In this case, if we know how to use it properly, it could stimulate people's creativity, finally giving shape to art for everyone.

Photo by bady qb on Unsplash


RF-Pose: a motion capture technology that sees beyond the walls



Compared to other animal species, our senses are not particularly developed. They have slowly dulled as a consequence of our mental and intellectual development over the last 10,000 years. But what is wonderful about our continuous evolution is the insatiable desire to enhance the senses and abilities we already possess, recovering primitive abilities or borrowing new ones from other animal species.

Among the five senses, the best we have is undoubtedly sight. There is a reason why we give such great symbolic value to our eyes. We can now see in the dark or perceive infrared and thermal signatures, as some animals already can. The next evolution, according to many, will be the ability to see through objects.

As we know, some animals, such as bats, use highly developed biological sonar to move, allowing them to locate prey even at great distances and through trees and foliage.

Humans have used radio frequencies for about a century, but only in recent years have we been able to create a portable and sufficiently precise device that can recognize a person or an object through walls using this technology. Today, the cost of the radar device and its limited precision remain the biggest problems, but the latest developments in machine learning and artificial intelligence are bringing significant progress.


The MIT project

About a year ago, MIT published the results of a study on the use of low-frequency radio waves to recognize people's movements through walls. The AI, a deep neural network called RF-Pose, is capable of creating 2D models of people's poses.

Initially, the researchers paired the wireless device with a camera to assist it during the first training phases. Thanks to visual recognition, the "radar" was able to learn the correlation between radio signals and images of people.

When RF-Pose began to work independently, the researchers noted that, surprisingly, it was able to perceive people through walls with a drop in accuracy of only 10%. Remarkably, the precision of the radio-wave system is comparable to that of the visual one.
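The cross-modal training idea described above (a camera-based "teacher" supervising a radio-based "student") can be sketched in toy form. Everything below is illustrative: synthetic data and a simple linear student model, not the architecture from the MIT paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: RF feature vectors and the camera "teacher" targets.
# In RF-Pose the teacher is a vision-based pose estimator; here we fake
# its output with a fixed random linear map (purely illustrative).
n_samples, rf_dim, pose_dim = 200, 32, 14
rf_features = rng.normal(size=(n_samples, rf_dim))
true_map = rng.normal(size=(rf_dim, pose_dim))
teacher_targets = rf_features @ true_map  # "keypoint scores" from the camera

# Student: a linear model trained by gradient descent to reproduce the
# teacher's targets from RF input alone (cross-modal supervision).
W = np.zeros((rf_dim, pose_dim))
lr = 0.01
losses = []
for _ in range(100):
    pred = rf_features @ W
    err = pred - teacher_targets
    losses.append(float(np.mean(err ** 2)))
    W -= lr * (rf_features.T @ err) / n_samples

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Once trained this way, the student no longer needs the camera at all, which is what lets the real system keep working behind a wall where the teacher cannot see.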

Another skill RF-Pose can claim is the ability to identify individual people from their physical characteristics and movements, with an accuracy of 80%.


Privacy compliance 

One of the most interesting aspects of this tool is the possibility of tracking people's movements without violating their privacy. The use of radio frequencies, in fact, captures only each person's silhouette; it does not identify them through facial features or other individual characteristics.

This way, it is possible to develop commercial applications of RF-Pose that are safe and compliant with privacy regulations, which are becoming more and more restrictive.


The practical outcomes

The possible practical applications of this technology are endless, also thanks to the system's low development cost and ease of use.

MIT researchers say that it will initially be tested in the medical field, to detect or follow the progression of certain diseases, such as Parkinson's and ALS, through subtle movement signals and physical characteristics. At the same time, there are plans to use it in nursing homes or the private homes of people with mobility problems, as a safety device that can alert a healthcare professional in the event of a fall or other risky situation.

Other possible areas of use are gaming, in which recognition of the player's movements would no longer depend on a video camera, as well as security and robotics, with enhanced movement capabilities and mapping of interiors. Apple and Google are already working on the latter with their respective indoor-mapping programs, developing technologies that rely on personal devices' Wi-Fi and the physical layout of buildings.


Neosperience’s solution for physical retailers 

Over the years, at Neosperience we have developed a solution for Customer Experience in physical retail, based on a technology that is innovative and revolutionary for this field.

People Analytics, in fact, uses in-store cameras equipped with our AI tools to recognize customers' movements and to highlight places of greater or lesser interest through a heat map. A system of this type surfaces very interesting insights and reveals problems or strengths in the placement of products or the performance of employees.
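A heat map like the one described can be built by accumulating tracked positions into a grid of floor cells. Here is a minimal sketch with simulated tracking data; the positions, store size and grid resolution are all illustrative, not drawn from People Analytics.

```python
import numpy as np

# Simulated tracked positions (x, y) in meters inside a 10 m x 10 m store.
# In practice these would come from the camera-based tracking system.
rng = np.random.default_rng(1)
positions = np.concatenate([
    rng.normal(loc=(2.5, 3.5), scale=0.5, size=(300, 2)),  # a busy display
    rng.uniform(0.0, 10.0, size=(100, 2)),                 # background traffic
])

# Accumulate observations into a coarse grid: each cell counts how
# often a tracked person was observed inside it.
grid_size, store_size = 10, 10.0
heatmap = np.zeros((grid_size, grid_size))
for x, y in positions:
    if not (0 <= x < store_size and 0 <= y < store_size):
        continue  # ignore points outside the floor plan
    row = int(y / store_size * grid_size)
    col = int(x / store_size * grid_size)
    heatmap[row, col] += 1

row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"hottest cell: row {row}, col {col} ({int(heatmap[row, col])} visits)")
```

The hottest cells of the grid correspond to the areas of greatest interest; comparing them with the store layout is what reveals well- or poorly-performing product placements.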

Obviously, this technology could be enhanced with MIT's RF-Pose to overcome the store's environmental limitations, something a camera cannot do.


Another application 

Another possible application is the implementation of the RF-Pose system in online and offline poster advertising.

Some advertising agencies, such as Grandi Stazioni Retail, are already using totems capable of reading the public's facial expressions through cameras and, more generally, of keeping track of the number of people passing by and calculating their position in space.

However, this system is very expensive and hard to deploy everywhere. A tool based on RF-Pose, instead, could count passers-by and track their position; it would be far less demanding and could also be applied to traditional signage without significant implementation costs.

Furthermore, the same system could be used to send notifications to customers who pass in front of the ad, provided they have authorized receiving them.

Finally, RF-Pose is a system that promises to improve our way of understanding user movements and activities, always bearing in mind that privacy is the most important thing to protect.


Photo by Joe Yates on Unsplash


Crisis Management: how AI can take part in it successfully

In a progressively dynamic, technological and globalized world, companies may encounter more and more potential or real crises. Just think about cyberattacks: in recent years they have multiplied in every field, putting sensitive data and IT systems at serious risk. 

It’s become essential to be able to foresee and solve the problems affecting your brand's reputation. If you develop the right skills within your company and equip yourself with the right expertise, you’ll be ready to deal with any situation.

On the other hand, if even small problems disrupt your work, then your business risks going under when an internal or external crisis occurs.

Recall the admonition of Ian Mitroff, perhaps the most influential crisis-management expert: "You don't have to ask if a critical event will happen, but when, where and with what consequences".

Deloitte’s innovative research

In 2018, Deloitte conducted innovative research on managers' perception of crisis management. The results were surprising.

What the researchers noticed was managers' strong self-confidence: they often believe their company is ready to face unexpected and dangerous events, even though many of them have no empirical data to confirm their convictions.

For example, 88% of respondents said they could cope with a corporate scandal, while only 17% had ever experienced it personally or during a simulation.

That's the point. When experiencing a crisis directly, managers' perception changes considerably. 

The research showed that among those who had experienced a dramatic business situation in the previous two years, the need to invest in prevention and training was felt considerably more strongly than among colleagues who hadn’t yet faced a crisis.

So what’s the right thing to do?

Preparing your business in advance. First of all, it is necessary to draw up a list of the possible problems the company may face. Framing a consistent risk assessment is the first step towards not being caught unprepared.

Subsequently, a task force should be appointed and organized. It typically involves those who deal with public relations and the company's top management, who will be the ones interacting with media and institutions. Moreover, the legal department will have to untangle and explain potential legal issues.

It is also essential to plan crisis simulations based on the risk assessment previously prepared. Experience is the only useful tool to deal with a crisis in the best possible way, but it is also the magnifying glass on corporate weaknesses.

It is said that a person shows their true nature when in danger. The same happens with companies. We must never underestimate the power of simulation for the growth and awareness of employees and managers.

What are the existing tools for crisis management?

One worth mentioning is In Case of Crisis by RockDove Solutions, an app for company executives. This tool promises to help companies deal with crises through operational protocols, in-app messaging, customized notifications and activity reports.

However, the real question is: what more can we do?

It is interesting to start from the well-known definition of corporate crisis given by Timothy Coombs, Associate Professor in Communication Studies at Eastern Illinois University.

"A crisis is the perception of an unpredictable event that endangers the expectations of stakeholders, and that can seriously compromise the operational capacity of an organization, with negative consequences".

We will try to identify which solutions, based on AI, would make this definition obsolete.

AI-based solutions 

Let's start with the unpredictability.

We have already seen that the possibility of forecasting risky situations grows considerably when managers are properly trained and equipped with the right tools. Now, imagine implementing within such a tool an AI able to help those responsible for assessing risks, taking into account the company's operations and size, its geographical position, and the external macro factors that could influence its processes.

Once this has been done, the app's AI could develop ad hoc training programs for each manager, imagining plausible situations and their implications, even on a probabilistic basis, and independently linking questions to the best practices and behaviours to be adopted.

Moreover, it could simulate a real crisis, involving all managers at the same time and measuring reaction times and the effectiveness of their choices and operations, drawing on crises already resolved.

Concerning the loss of the company's operational capabilities, the tool's benefits would be to facilitate communication between the members of the task force and to keep crisis parameters under control.

To be more specific, when a crisis is undoubtedly in progress, those who are aware of it could send an alert to all the other managers, with the most crucial data provided by the system.

Specific functions

Furthermore, the Artificial Intelligence could track company-related keywords on the Internet and social media, like any other web-listening tool, to keep the crisis's evolution under control. It would also be possible to monitor the work of customer service and task-force members in one place.
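As a hypothetical illustration of such keyword tracking, a minimal alerting rule could flag the days on which brand mentions spike above a rolling baseline. A real web-listening tool would add live data feeds, sentiment analysis and deduplication; the function name and thresholds here are purely illustrative.

```python
from collections import deque

def spike_alert(daily_mentions, window=7, factor=3.0):
    """Return the day indices where mentions exceed `factor` times
    the average of the previous `window` days."""
    history = deque(maxlen=window)
    alerts = []
    for day, count in enumerate(daily_mentions):
        if len(history) == window:
            baseline = sum(history) / window
            if baseline > 0 and count > factor * baseline:
                alerts.append(day)
        history.append(count)  # current day becomes part of the baseline
    return alerts

# Typical chatter around 10 mentions/day, then a sudden crisis spike.
mentions = [9, 11, 10, 12, 8, 10, 11, 10, 9, 95, 120]
print(spike_alert(mentions))  # → [9, 10]
```

A rule of this kind is what would trigger the alert to the task force mentioned above, days before a human analyst might notice the trend.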

This way, the risks of seeing business operations compromised would be significantly reduced, thanks to these new technological opportunities. Besides, the AI would learn from its own and others' mistakes, continuously improving and limiting the unfavorable consequences of the crisis.

In conclusion, the possibilities for further improvement exist and must be pursued. Predicting a crisis and limiting its damage is an issue that concerns the life and work of thousands and thousands of workers and citizens. Artificial Intelligence precisely serves this purpose: to help humans live better and more safely.


The dark side of tech’s ethics


Our judgment, like a pendulum, continuously swings between optimism and pessimism. This inclination is self-evident when we discuss the technological developments of recent decades that have modified our way of living.

In 1964, Umberto Eco published Apocalypse Postponed, an essay that was meant to put in order the different judgments expressed on mass society. Eco tried to find a correct and rational middle ground between those who were enthusiastic about cultural innovations and those who loathed them. As an old catchphrase says, "in medio stat virtus".


The current situation

The same arguments could be applied to our own troubled years, in which two opposite parties fight over topics such as social networks, privacy, personal relationships, online hate, irresponsibility, and so on: those who have faith in the birth of a just world, and those who predict the end of our existence. Like the pendulum above, we feel conflicting emotions about the future of technology.

Recently, the media have reported news about the racist, discriminatory and insensitive behavior of artificial intelligence applications, usually in the context of social network moderation, recruitment procedures and predictive policing.

Well, there is no wonder; technology is not neutral.

Technology is created by humans for humans, and carries within it all the prejudices and personal histories of those who develop it. This appears clearly in applications where technology has a voice and relates directly to its creators.


Microsoft's Tay bot

In 2016, Microsoft released its most advanced bot, Tay, on Twitter, so that it could improve its conversational skills on the social network. In less than 24 hours, Tay started using offensive and racist language, forcing Microsoft to shut it down.

The causes of this media disaster were soon discovered: in that short time, the artificial intelligence, which had been given no understanding of, or restrictions on, misbehavior, learned inappropriate language from Twitter users.


YouTube's moderation system

Another example worth mentioning, given its pervasive presence in our lives, is social networks' moderation systems. As we all know, in 90% of cases users' posts are checked by an artificial intelligence trained to recognize inappropriate content. Well, it is not uncommon for users to become the target of discriminatory censorship performed by the moderation system.

It is worth mentioning the episode involving YouTube, which penalized, economically and publicly, the LGBTQ-themed content of numerous creators. In this case, the system was unable to distinguish between sexually explicit themes and videos in which the authors discuss their sexual and gender orientations.

Many such cases could be mentioned, along with many others that have not received media visibility and thus remain unknown.


OpenAI and university courses

However, in recent years, many players have understood the importance of this topic. OpenAI, a non-profit company that counts Elon Musk among its founding backers, has set itself the goal of creating a free and secure artificial intelligence that improves the life of all humanity, without discrimination.

Many universities, on the other hand, have begun to include courses and specializations on ethics in artificial intelligence in their curricula: Harvard, Stanford and the Massachusetts Institute of Technology, among others. All the most important pools of talent in the technology field have finally understood the importance of teaching their students that this kind of technology is not neutral and must be designed according to our conscience.

Ultimately, there is only one keystone: machines don't care about our future; our wellbeing depends solely on the people who develop them.


Photo by Nadine Shaabana on Unsplash


Visual Marketing – Everything You Need To Engage Customers


Take a look at the advertising and marketing trends of the last four or five years. You will immediately recognize a common trait: visual content. Today, in fact, people prefer visual content to plain text.

As a result, social networks have become primarily visually oriented - think stories and 360° videos - with immersive technologies such as VR and AR used as marketing tools to create a full, immersive experience for customers.

Why do people prefer visual content? And how can companies choose the right image to deliver valuable content and increase customer engagement?


We are wired to process and respond to visual content better than to words: it’s in our DNA. In fact, 50% to 80% of our brain is dedicated to visual processing - colors, shapes, visual memory, patterns, spatial awareness, and image recollection. This leads to an innate preference for images, illustrations, videos, and colors.

Also, today’s customers want to receive information quickly and without much effort; thus, they are more likely to consume visual content, which is reportedly processed 60,000 times faster than text. And what about the information we retain from experiences? We reportedly remember 20% of what we read and 80% of what we see.

This doesn’t mean that text is no longer important. Extensive textual content can provide an unmatched level of completeness, and sometimes a simple image cannot explain complicated concepts. By combining the two elements, text and images, however, you can achieve the best results.


With the enormous amount of content and information circulating every single day, companies need to do everything they can to differentiate themselves. Using visual elements is much more effective than text alone because - as we have just seen - they capture customers’ attention.

Moreover, the use of relevant and compelling visuals generates more engagement, as it makes website visitors stay longer on the page, consume more content, and understand the messages you are trying to deliver.

The use of high-quality lead magnets such as infographics or canvases can also bring many relevant inbound links, boosting your ranking in search results and increasing brand relevance. It has been shown that customers make decisions based on what they remember. Thus, by leveraging the most critical driver of customers’ choices - memory - visuals ultimately increase your chances of being recognized.


According to iScrabblers, a real photo produces better results than a stock photo (35% more), while employee photos and customer testimonials generate engagement in terms of viewing time and conversion rate, respectively.

Also, colors capture attention, increase recall, comprehension, and brand recognition. Not to mention how they can influence human emotions: certain colors or color combinations generate particular feelings and affect the way people (and customers) make decisions.

To achieve better results, consider putting more effort into creating original content and matching colors to the emotions you want your message to evoke.


Marketers often struggle to produce engaging visual content on a consistent basis. As a result, more and more companies adopt online tools or software to streamline the production of such content and enhance its performance.

However, this might not be enough: the fact that your image is beautifully crafted doesn’t mean it is also effective. Every time you grab the audience’s attention but they don’t recall your brand or product, you lose a chance to convert and monetize.

So, how can you find the perfect image for your blog post, advertising, or product presentation? You can rely on AI tools such as Image Memorability, which can answer this specific need, revealing the memorability score of images or advertisements before they are published, to predict the effectiveness of your visual marketing.
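Since Image Memorability is a commercial product, its actual API is not reproduced here. Purely as a hypothetical sketch, a memorability-driven selection step might look like this, with `score_memorability` standing in for whatever model or service provides the score; the file names, scores and threshold are all made up for illustration.

```python
def score_memorability(image_name):
    # Hypothetical stand-in for a real memorability model or service:
    # it just returns canned scores for demonstration purposes.
    canned = {
        "hero_shot.jpg": 0.81,
        "stock_photo.jpg": 0.42,
        "team_photo.jpg": 0.67,
    }
    return canned.get(image_name, 0.5)

def pick_best_image(candidates, threshold=0.6):
    """Rank candidate images by predicted memorability and return the
    best one, or None if nothing clears the threshold."""
    best = max(candidates, key=score_memorability)
    return best if score_memorability(best) >= threshold else None

candidates = ["stock_photo.jpg", "hero_shot.jpg", "team_photo.jpg"]
print(pick_best_image(candidates))  # → hero_shot.jpg
```

The point of the workflow is the ordering of steps: scoring happens before publication, so low-memorability creatives never reach the audience in the first place.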


Photo by Tony Webster on Unsplash
