AI and Photography: beauty lies in the programmers' eyes

Can AI and Photography work together?

A few weeks ago, in our article “Music and Artificial Intelligence”, we talked about how technology is modifying the music industry.

We stressed that this phenomenon is not dangerous, provided that Artificial Intelligence is applied responsibly by all the actors involved.

The same reasoning also applies to the relationship between AI and Photography, with the necessary differences.

There are two cases, opposite in terms of aesthetic and ethical quality, to be taken as an example of the role played by technology in the photography field: the "Dreams of New York" project and the development, in the Machine Learning world, of the GANs technique.

The first one is an artistic project created by Tanner Woodbury and Nikolos Killian, two American designers. While wandering in the streets of New York at the slow pace of Google Street View, they noticed the beauty of some sights of the city.

And that’s why they decided to carry out a photographic project, reframing those “amateur” shots and turning them into black and white. They held an exhibition, with an artbook that quickly sold out.

The technological tool suddenly became an involuntary art photographer. American copyright legislation was crucial to the success of the project: if it is a machine that takes the picture, no human author holds the intellectual property, so anyone can use the image.

On the other hand, GANs’ case was a whole other story.

The acronym stands for Generative Adversarial Networks and indicates a Machine Learning technique introduced in 2014.

Its underlying process is quite simple: there are two neural networks, a generator and a discriminating opponent. The first has the task of taking random input and producing new, synthetic data.

The latter analyzes the results produced by its twin to check whether they respect the realism criteria defined by the programmer.

Let's take a practical example: a GAN has to analyze a database of thousands of people's faces. The generative network has the task of creating the image of a completely new face, while the opposing network has to determine whether the image created by its companion is real or not. Each image is a battle between the two networks; one wins and the other loses. The system, of course, learns from the outcome of each round.
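The adversarial loop described above can be sketched in a few lines of plain Python. This is a deliberately tiny illustration, not a real GAN: the "faces" are just numbers drawn from a target distribution, both networks are one-layer linear models, and the gradients are computed by hand. All names and values here are invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real "faces" are just numbers drawn from N(4, 1); the generator
# G(z) = w*z + b must learn to produce numbers that look like them.
REAL_MEAN = 4.0

# Generator parameters (it starts far from the real distribution).
w, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(a*x + c): "probability that x is real".
a, c = 0.1, 0.0

lr = 0.03
for step in range(3000):
    x_real = random.gauss(REAL_MEAN, 1.0)
    z = random.gauss(0.0, 1.0)
    x_fake = w * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    grad_a = -(1 - d_real) * x_real + d_fake * x_fake
    grad_c = -(1 - d_real) + d_fake
    a -= lr * grad_a
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1, i.e. fool the opponent.
    d_fake = sigmoid(a * x_fake + c)
    grad_w = -(1 - d_fake) * a * z
    grad_b = -(1 - d_fake) * a
    w -= lr * grad_w
    b -= lr * grad_b

fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(f"generator mean after training: {fake_mean:.2f} (real mean = {REAL_MEAN})")
```

In a real GAN both players are deep networks and the data are high-dimensional images, but the structure of the training loop, one update to sharpen the judge and one update to fool it, is exactly this.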

After a few years of refinement, today there are GANs able to "imagine" and create faces so credible as to be indistinguishable from real ones, both to the opposing network and to the human eye.

As with deepfakes, the risk is that these tools fall into the wrong hands, perhaps damaging the community by creating non-existent people. Moreover, copyright law allows anyone to use GAN-generated images for their own purposes, precisely because they are created by machines, not by people.

A more philosophical question persists.

Photography is perhaps the instrument best suited to telling the story of humanity and reality. If you use it to generate something that does not exist, a contradiction arises.

Broadly speaking, it is the same critical issue that emerged with the spread of Photoshop, but made more acute by the central role of the machine in the falsification process.

In this case, as in similar situations, the problem does not lie in the technology, but in those who hide behind it. GANs were originally conceived by their creator, Ian Goodfellow, to make large amounts of data available to small research teams and specialized centers, making AI training more sustainable.

For example, a GAN can create, starting from a limited database of images, new original elements with which an artificial intelligence can be trained, eliminating the cost of retrieving photographs.

Ergo: a tool for the democratization of technology and creativity.

GANs were then used in extremely creative ways, conceptually surpassing their original purpose: for example, in unique artistic projects such as the "artificial" creation of a painting that sold at auction for 432 thousand dollars.

At the same time, some artists, such as the British artist Anna Ridler, have used GANs in their works and performances; worth mentioning is the short film Fall of the House of Usher, in which the visuals become plastic art conceived and composed by the machine.

If we wanted to classify the different uses of GANs, we should distinguish between two intents: the creative and the "astute". The mentality and objectives of those behind the computer, rather than behind the camera, determine the truthfulness and ethics of the results. Photography is a science, and today, in the era of numbers, this is more evident than ever.

Photo by Rayan Almuslem on Unsplash


What is Psychographics? An overview and the User Insight practical case

What is Psychographics?

It is the study of the individual based on their interests, personality, and habits. It is the natural evolution of profiling through socio-demographic, geographical and behavioral data.

Psychographics is not a recent field of study: as a branch of psychology, it was developed and applied to marketing and traditional research (focus groups, market research, etc.).

However, it was through digital technology that it developed its full potential.

By analyzing user behavior on social media, e-commerce platforms and any other "virtual" environment, Psychographics is now able to profile users in a way that was unthinkable just a few years ago.

Its goal is to understand individual characteristics such as emotions, values, and attitudes, as well as a whole set of other psychological factors.

All these data provide precious insights about the motivations behind people's behavior, for example, why they buy a specific product, or support a certain cause, or vote for a particular political candidate.

We have all heard about the infamous Cambridge Analytica scandal. The researchers and marketers involved were able to boost numerous political campaigns thanks to psychographic data illegally harvested from people's social profiles.

The method they used was to divide the subjects into five macro-clusters, based on how strongly they presented each of the following psychological traits:

Openness: this trait indicates how open-minded a person is. A person with a high level of openness is curious, creative and open to change.

Conscientiousness: a person who shows a high level of conscientiousness is responsible, sets long-term goals and does not act impulsively.

Extroversion: subjects characterized by this trait love having fun with people and thrive in social environments. They are enthusiastic, but often let themselves be guided by others, and they love being the center of attention.

Agreeableness: a person with high levels of agreeableness is usually friendly, kind and diplomatic. They also show optimism and tend to trust others.

Emotional stability (or its negative counterpart, Neuroticism): a person with a high level of emotional stability tends to stay calm under pressure and to experience positive emotions easily.

This model is known as OCEAN (from the initials of the five traits), or the Big Five.

How does this model apply to marketing?

Through Psychographics, it is possible to understand the fundamental individual characteristics of your customers, in order to collect useful guidelines on how to communicate and create one-to-one messages. 

Let's take an example. A company in the energy market needs to communicate a promotional offer to its audience, but first it decides to cluster that audience with the OCEAN psychographic model.

How would individual communication change?

If the customer shows a strong affinity to the Openness cluster, he will receive a graphically creative banner that offers the possibility to customize the energy contract according to his needs.

Conversely, if the customer belongs to the Extroversion cluster, they will be told that the offer has been appreciated by many people, with the possibility of receiving a discount if they bring a friend.
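The cluster-to-message logic of this example can be sketched in a few lines. All of the trait scores and banner texts below are invented for illustration; in a real system the scores would come from behavioral analysis.

```python
# Hypothetical OCEAN scores (0 to 1) inferred for one customer,
# e.g. from their browsing behavior.
customer = {
    "openness": 0.82,
    "conscientiousness": 0.40,
    "extroversion": 0.55,
    "agreeableness": 0.30,
    "neuroticism": 0.20,
}

# One message variant per dominant trait (illustrative copy only).
MESSAGES = {
    "openness": "Design your own energy plan: build the contract that fits you.",
    "conscientiousness": "Lock in a stable rate and plan your costs years ahead.",
    "extroversion": "Thousands already joined: bring a friend and get a discount.",
    "agreeableness": "An offer our customers love, with support always at your side.",
    "neuroticism": "No surprises: a simple, protected plan you can rely on.",
}

def pick_message(scores: dict) -> str:
    """Select the banner matching the customer's dominant OCEAN trait."""
    dominant = max(scores, key=scores.get)
    return MESSAGES[dominant]

print(pick_message(customer))
```

Here the customer's highest score is Openness, so the customizable-contract banner is selected, exactly as in the scenario above.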

Given the power that this method puts in companies' hands, the market has been subjected to strict regulations. What Cambridge Analytica did just a few years ago would be impossible to accomplish today. In recent years, alternative tools have been developed, fully compliant with GDPR, which allow companies to acquire the same type of information and to use it, this time, for the benefit of people.

This is why Neosperience has created User Insight, a tool that uses the latest Artificial Intelligence, Machine Learning and Advanced Analytics technologies to allow companies to learn about the psychographic traits of customers, thanks to the analysis of their browsing behaviors.

In a market where personalization of the offer has become the key to the success of commercial proposals, understanding the needs and desires of each customer in full respect of their privacy becomes an essential factor.

The future belongs to those who will be able to use new technologies to constantly improve customer experience, progressively reducing the "gaps" between physical and digital worlds. At Neosperience, we believe that this can be possible, and we work to give substance to a technology that allows companies to be more and more empathic and closer to their customers.

Photo by Markus Spiske on Unsplash

 

Music and Artificial Intelligence. Please don’t shoot the piano player

 

Artificial Intelligence is becoming increasingly widespread, even in unexpected areas. Until a few years ago it was thought that its use would be limited to industrial production, repetitive tasks and, in general, jobs that do not enrich the human spirit. Today this assumption is no longer valid. Now AI is also an artist.

Painting, sculpture, poetry, photography, cinema; there is no artistic field in which Artificial Intelligence has not been applied at least once, often with surprising results.

The musical field is among the most involved in this revolution of creativity, probably because music, after all, is an art that lives on mathematics and physics, and is therefore predisposed to the influence of algorithms, code, and data.

The latest albums, soundtracks and songs by artists or companies that have used tools based on Artificial Intelligence have alarmed the music market. According to experts, the sector now risks a profound revolution (if not destruction). But is that really the case? In other words, is it right or not to shoot the..."artificial" piano player?

 

How does the application of artificial intelligence to music work?

Simply put, in order to learn, an AI is fed thousands and thousands of songs through neural networks (mathematical models that imitate biological ones) trained with machine learning, and in particular deep learning (a sub-category of ML able to infer meaning even without human supervision). The pieces are fragmented and analyzed; the machine extracts their basic information and recognizes patterns it can then use to create original works, similar to those any artist could compose.
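Deep networks are far more powerful, but the core idea of extracting patterns from existing pieces and recombining them into new ones can be illustrated with a much simpler statistical model: a first-order Markov chain over notes. The "training" melodies below are made up for the example; this is a toy sketch, not the deep learning pipeline described above.

```python
import random
from collections import defaultdict

random.seed(42)

# Toy "corpus": note sequences the model learns patterns from.
melodies = [
    ["C", "E", "G", "E", "C", "D", "E"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E", "D", "C"],
]

# Learn transition statistics: which note tends to follow which.
transitions = defaultdict(list)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start: str, length: int) -> list:
    """Generate a new melody by sampling the learned transitions."""
    notes = [start]
    for _ in range(length - 1):
        notes.append(random.choice(transitions[notes[-1]]))
    return notes

new_melody = generate("C", 8)
print(new_melody)
```

The generated sequence is original (it need not match any training melody) yet it respects the note-to-note patterns of the corpus, which is, in miniature, what the neural approaches do at vastly larger scale.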

 

Everything depends on the use made of it, and how it sounds…

While the learning process is similar for any machine-learning system, there are two quite different ways of applying AI to music: Flow Machines by Sony and Magenta by Google, for example, sit at the two extremes.

The first is not a creative Artificial Intelligence, or at least not in the usual sense of the term; it merely facilitates the artist's work, allowing the person to free their creativity, stimulating it with suggestions and ideas based on their preferences and attitudes.

Magenta, on the other hand, is a true artificial composer that, depending on the inputs provided to it, independently manages to create an original track. The quality of the composition is still not pleasing from many points of view, but technological innovation is growing exponentially and so are its results.

These are not the only tools available at the moment; among others, we can mention AIVA, OpenAI's MuseNet, Amper and Jukedeck. Each is specialized in certain features and functionalities. What they have in common is that they have all attracted the attention of media and investors.

If we also consider the recommendation algorithms of streaming platforms like Spotify or Apple Music, or all the applications of AI in the field of editing tools, it is clear that the penetration of this technology in the musical field is more advanced than we might believe.

 

But what are the possible consequences of a macro-spread?

At least in the short term, there should be no substantial change in the way we listen to or choose our music.

Some "artificial" songs and albums, like "I AM AI" by Taryn Southern (sung by the performer but composed, played and produced with the AI software Amper), will continue to come out and will surely achieve commercial success, but they will be exceptions, appreciated more for their novelty than for their intrinsic quality.

Over time, however, things will change. A sign of this evolution is the acquisition of Jukedeck, which we mentioned earlier as one of the best intelligent music composition tools, by the company behind TikTok, one of the most successful social networks of recent years and especially loved by the younger generations.

Imagine what could come of this marriage. Maybe we will have the opportunity, once registered on that social network, to create our song, helped by an evolved AI, and to sing it and share it with friends. 

In this way, it would be possible to break down a barrier that is impassable for most people: learning a musical instrument.

 

Every subscriber could become a singer, a musician, and maybe a music influencer.

This story is the fruit of our imagination, however beautiful or frightening it may be. Things are undoubtedly changing, and music is facing many transformations driven by technological innovation (augmented reality concerts, deceased artists returning to sing in the form of holograms, cryptocurrencies to buy songs and albums directly from the artists, and so on).

Ultimately, to answer the question that we asked ourselves in the beginning: is it right or not to shoot the "artificial" piano player? 

Well, there is one thing that is always true: blocking innovation is counterproductive. The goal is to be able to guide it on the right path, to allow a gentle transformation for artists and experts and not damage anyone.

Artificial intelligence was born as a tool to enable or facilitate human activities. In this case, if we learn to use it properly, it could stimulate people's creativity, finally giving shape to an art for everyone.

Photo by bady qb on Unsplash

RF-Pose: a motion capture technology that sees through walls


 

Compared to other animal species, our senses are not particularly developed. They have slowly dulled as a result of our mental and intellectual development over the last 10,000 years. But what is wonderful about our continuous evolution is the insatiable desire to enhance the senses and abilities we already possess, recovering primitive abilities or borrowing new ones from other animal species.

Among the five senses, the most developed is undoubtedly sight. There is a reason why we attach such great symbolic value to our eyes. We can already see in the dark or perceive infrared and thermal signatures, as some animals do. The next evolution, according to many, will be the ability to see through objects.

As we know, some animals, such as bats, use highly developed echolocation, a biological sonar, to move around, allowing them to detect prey even at great distances and through trees and foliage.

Humans have used radio frequencies for about a century, but only in recent years have we been able to create a portable and sufficiently precise device that can recognize a person or an object through walls using this technology. Today, the cost of the radar device and its limited precision remain the biggest obstacles, but the latest developments in machine learning and artificial intelligence are bringing significant progress.

 

The MIT project

About a year ago, MIT published the results of a study on the use of low-frequency radio waves to recognize people's movements through walls. The AI, a deep neural network called RF-Pose, is capable of creating 2D stick-figure models of human poses.

Initially, they added a camera to the wireless device, to help it during the first phases. Thanks to visual recognition, the "radar" was able to find a correlation between radio signals and people's images.

When RF-Pose began to work independently, the researchers noted that, surprisingly, it was able to perceive people through walls with a drop in accuracy of only 10%. It is remarkable that the precision of the radio-wave system is comparable to that of the visual one.

Another skill RF-Pose can claim is the ability to recognize individual people, based on their physical characteristics and movements, with an accuracy of 80%.

 

Privacy compliance 

One of the most interesting aspects of this tool is the possibility of tracking people's movements without affecting their privacy. The use of radio frequencies, in fact, makes it possible to collect only each person's silhouette, not to identify them through facial features or other individual characteristics.

In this way, it is possible to develop commercial applications of RF-Pose that are safe and compliant with privacy regulations, which are becoming more and more restrictive.

 

The practical outcomes

The possible practical outcomes of this technology are endless, also thanks to the fact that the system has a low development cost and is easy to use. 

MIT researchers say that it will initially be tested in the medical field, to recognize or track the progression of certain diseases, such as Parkinson's and ALS, through micro-signals and physical characteristics. In parallel, there are plans to use it in care homes or in the private homes of individuals with mobility problems, as a safety device that, in the event of a fall or other risky situation, can alert a healthcare professional.

Other possible areas of use are gaming, where the recognition of the player's movements would no longer depend on a video camera, and security and robotics, with enhanced movement capabilities and mapping of interiors, just as Apple and Google are already doing with their Indoor Maps Program and IndoorMaps, developing technologies that make use of personal devices' Wi-Fi and the physical layout of buildings.

 

Neosperience’s solution for physical retailers 

Over the years, at Neosperience we developed a solution for Customer Experience in physical retail that is based on an innovative and revolutionary technology for this field.

People Analytics, in fact, uses in-store cameras equipped with our AI tools to recognize customers' movements and to highlight areas of greater or lesser interest through a heat map. A system of this type surfaces valuable insights and reveals problems or strengths in the placement of products or the performance of employees.
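The heat-map idea itself is straightforward to sketch: tracked (x, y) positions are binned into a grid, and cells visited more often accumulate higher values. The store dimensions and coordinates below are invented for the example; in the real system the positions would come from camera-based detection.

```python
# Tracked customer positions in the store, in meters (invented data).
positions = [(1.2, 0.8), (1.3, 0.9), (1.4, 0.9), (4.1, 2.5),
             (4.0, 2.6), (4.2, 2.4), (4.1, 2.5), (7.8, 0.3)]

STORE_W, STORE_H = 10.0, 5.0   # store footprint in meters
COLS, ROWS = 5, 5              # heat-map resolution

# Each cell counts how many detections fell inside it.
heatmap = [[0] * COLS for _ in range(ROWS)]
for x, y in positions:
    col = min(int(x / STORE_W * COLS), COLS - 1)
    row = min(int(y / STORE_H * ROWS), ROWS - 1)
    heatmap[row][col] += 1

# The hottest cell marks the area of greatest interest.
hottest = max(
    ((r, c) for r in range(ROWS) for c in range(COLS)),
    key=lambda rc: heatmap[rc[0]][rc[1]],
)
print("hottest cell:", hottest, "visits:", heatmap[hottest[0]][hottest[1]])
```

A store manager reading the grid can immediately see which shelf area draws the most traffic and which corners go unvisited.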

Naturally, this technology could be enhanced with MIT's RF-Pose to overcome the store's environmental limitations, such as walls and shelving, which a camera cannot see through.

 

Another application 

Another possible application is the implementation of the RF-Pose system in the online and offline poster advertising. 

Some advertising operators, such as Grandi Stazioni Retail, are already using totems capable of reading the public's facial expressions through cameras and, more generally, of counting the people passing by and calculating their position in space.

However, this system is very expensive and hard to deploy everywhere. A tool based on RF-Pose, instead, could count passers-by and track their position while being decidedly less demanding, and it could be applied to traditional signage without significant implementation costs.

Furthermore, the same system could be used to send notifications to customers who pass in front of the ad, provided they have authorized receiving them.

Finally, RF-Pose is a system that promises to improve our way of understanding user movements and activities, always bearing in mind that privacy is the most important thing to protect.

 

Photo by Joe Yates on Unsplash

Crisis Management: how AI can take part in it successfully

In a progressively dynamic, technological and globalized world, companies may encounter more and more potential or real crises. Just think about cyberattacks: in recent years they have multiplied in every field, putting sensitive data and IT systems at serious risk. 

It’s become essential to be able to foresee and solve the problems affecting your brand reputation. If you develop the right skills within your company and acquire the necessary expertise, you’ll be ready to deal with any situation.

On the other hand, if even small problems disrupt your work, your business risks going belly up when an internal or external shock occurs.

Recalling the admonition of Ian Mitroff, perhaps the most influential crisis management expert: "You don't have to ask if a critical event will happen but when, where and with what consequences".

Deloitte’s innovative research

In 2018, Deloitte conducted innovative research on managers' perception of crisis management. The results were surprising.

What the researchers noticed was managers' strong self-confidence: they often think their company is ready to face unexpected and dangerous events, yet many of them have no empirical data to back up this conviction.

For example, 88% of respondents said they could cope with a corporate scandal, while only 17% had ever experienced it personally or during a simulation.

That's the point. When experiencing a crisis directly, managers' perception changes considerably. 

The research showed that among those who had experienced a dramatic business situation in the two previous years, the need to invest in prevention and training was considerably higher than the priorities highlighted by the colleagues who hadn’t faced a crisis yet.

So what’s the right thing to do?

Prepare your business in advance. First of all, it is necessary to draw up a list of possible problems that the company may face. Framing a consistent risk assessment is the first step towards not being caught unprepared.

Subsequently, a task force should be appointed and organized. It usually involves the public relations team and the company's top management, who will be the people that media and institutions interact with. Moreover, the legal department will have to untangle and explain any potential legal issues.

It is also essential to plan crisis simulations based on the risk assessment previously prepared. Experience is the only useful tool to deal with a crisis in the best possible way, but it is also the magnifying glass on corporate weaknesses.

It is said that a person shows their true nature when in danger. The same happens with companies. We must never underestimate the power of simulation for the growth and awareness of employees and managers.

What are the existing tools for crisis management?

One worth mentioning is In Case of Crisis by RockDove Solutions, an app for company executives. It promises to help companies deal with crises through operational protocols, in-app messaging, customized notifications and activity reports.

However, the real question is: what more can we do?

It is interesting to start from the well-known definition of corporate crisis given by Timothy Coombs, Associate Professor in Communication Studies at Eastern Illinois University.

"A crisis is the perception of an unpredictable event that threatens the expectations of stakeholders and can seriously compromise an organization's operational capacity, generating negative consequences".

We will try to identify which solutions, based on AI, would make this definition obsolete.

AI-based solutions 

Let's start with the unpredictability.

We have already seen that the ability to forecast risky situations grows considerably when managers are properly trained and equipped with the right tools. Now, imagine implementing within the tool an AI able to assist those responsible for risk assessment, taking into account the company's operations and size, its geographical position and the external macro factors that could influence its processes.

Once this has been done, the app's AI could develop ad hoc training programs for each manager, imagining plausible situations and implications, even on a probabilistic basis, and independently linking questions to best practices and behaviors to be adopted.

Moreover, it could simulate a real crisis, involving all managers at the same time and measuring reaction times and the effectiveness of choices and operations, drawing on crises already resolved.

As for the loss of the company's operational capacity, the benefits would be easier communication between all the members of the task force and constant monitoring of crisis parameters.

To be more specific, when a crisis is undoubtedly in progress, those who are aware of it could send an alert to all the other managers, with the most crucial data provided by the system.

Specific functions

Furthermore, Artificial Intelligence could keep track of company-related keywords on the Internet and social media, like any web listening tool, to monitor the evolution of a crisis. It would also make it possible to follow the work of customer service and task force members in one place.
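The keyword-tracking idea can be sketched very simply: scan a stream of posts for the brand name together with crisis-related terms, and raise an alert when mentions cross a threshold. The posts, brand name ("AcmeCorp"), keywords and threshold below are all invented for the example; a production web-listening tool would add sources, deduplication and sentiment analysis on top of this.

```python
import re

# Invented posts from a social stream being monitored.
posts = [
    "Loving the new AcmeCorp product line!",
    "acmecorp outage again?! third time this week",
    "Is anyone else affected by the AcmeCorp data breach?",
    "AcmeCorp breach confirmed, support not responding",
    "Great coffee this morning.",
]

CRISIS_KEYWORDS = {"outage", "breach", "leak", "scandal"}
ALERT_THRESHOLD = 3  # crisis mentions before the task force is alerted

def count_crisis_mentions(stream: list) -> int:
    """Count posts mentioning the brand together with a crisis keyword."""
    hits = 0
    for post in stream:
        words = set(re.findall(r"[a-z]+", post.lower()))
        if "acmecorp" in words and words & CRISIS_KEYWORDS:
            hits += 1
    return hits

mentions = count_crisis_mentions(posts)
if mentions >= ALERT_THRESHOLD:
    print(f"ALERT: {mentions} crisis-related brand mentions detected")
```

Crossing the threshold is what would trigger the alert to the other managers described above, with the relevant posts attached as context.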

This way, the risks of seeing business operations compromised would be significantly reduced, thanks to these new technological opportunities. Besides, the AI would learn from its own and others' mistakes, continuously improving and limiting the unfavorable consequences of the crisis.

In conclusion, the possibilities for further improvement exist and must be pursued. Predicting a crisis and limiting its damage is an issue that concerns the life and work of thousands and thousands of workers and citizens. Artificial Intelligence precisely serves this purpose: to help humans live better and more safely.

The dark side of tech’s ethics


Our judgment, like a pendulum, continuously swings between optimism and pessimism. This inclination is self-evident when we discuss the technological developments of recent decades that have changed our way of living.

In 1964, Umberto Eco published Apocalypse Postponed, an essay that was meant to put in order the different judgments expressed on mass society. Eco tried to find a correct and rational middle ground between those who were enthusiastic about cultural innovations and those who loathed them. As an old catchphrase says, "in medio stat virtus".

 

The current situation

The same arguments could be applied to our troubled years, in which two opposing camps are fighting over topics such as social networks, privacy, personal relationships, online hate and irresponsibility: those who have faith in the birth of a more just world, and those who predict the end of our existence. Like the pendulum above, we swing between different emotions about the future of technology.

Recently, the media have reported on the racist, discriminatory and insensitive behavior of some Artificial Intelligence applications, usually in social network management, recruitment procedures or predictive policing.

Well, there is no wonder; technology is not neutral.

Technology is created by humans for humans and carries within it all the prejudices and personal histories of those who develop it. It clearly appears in applications where technology has a voice and relates directly to its creators.

 

Microsoft's Tay bot

In 2016, Microsoft released its most advanced bot, Tay, on Twitter, to improve its conversational skills through interaction with users. In less than 24 hours, Tay started using offensive and racist language, forcing Microsoft to shut it down.

The causes of this media disaster were soon discovered: in that short time, the Artificial Intelligence, which had been given no understanding of, or limits on, misbehavior, learned inappropriate language from Twitter users.

 

YouTube's moderation system

Another example worth mentioning, given its pervasive presence in our lives, is social networks' moderation systems. As we all know, in the vast majority of cases users' posts are screened by an Artificial Intelligence trained to recognize inappropriate content. Yet it is not uncommon for users to become the target of discriminatory censorship by these systems.

It is interesting to mention the episode involving YouTube, which penalized, economically and publicly, the LGBTQ-themed content of numerous creators. In this case, the system was unable to distinguish between sexually explicit themes and videos in which authors simply discuss their sexual orientation or gender identity.

Many more cases could be mentioned as examples, along with many others that never received media visibility and thus remain unresolved.

 

OpenAI and university courses

However, in recent years, many players have understood the importance of this topic. OpenAI, a non-profit research company co-founded by Elon Musk among others, has set itself the goal of creating a free and secure Artificial Intelligence that improves the life of all humanity, without discrimination.

Many universities, meanwhile, have begun to add courses and specializations on AI ethics to their curricula: Harvard, Stanford and the Massachusetts Institute of Technology, among others. All the most important talent pools in the technology field have finally understood the importance of teaching their students that this technology is not neutral and must be designed according to our conscience.

Ultimately, there is only one key point. Machines don't care about our future; our well-being depends solely on the people who develop them.

 

Photo by Nadine Shaabana on Unsplash

Visual Marketing – Everything You Need To Engage Customers


Take a look at the advertising and marketing trends of the last four or five years. You will immediately recognize a common trait: visual content. Today, in fact, people prefer visual content over text.

As a result, social networks have become primarily visually oriented - think of stories and 360-degree videos - with immersive technologies such as VR and AR used as marketing tools to create a full, engaging experience for customers.

Why do people prefer visual content? Also, how can companies choose the right image to deliver valuable content and increase customer engagement?

WHY DO PEOPLE PREFER VISUAL CONTENT?

We are wired to process and respond to visual content better than words: it's in our DNA. It is estimated that 50% to 80% of our brain is involved in visual processing - colors, shapes, visual memory, patterns, spatial awareness, and image recollection. This leads to an innate preference for images, illustrations, videos, and colors.

Also, today’s customers want to receive information quickly and effortlessly; thus, they are more likely to consume visual content, which, according to an often-quoted estimate, is processed 60,000 times faster than text. What about the information we retain? We remember roughly 20% of what we read and 80% of what we see.

This doesn’t mean that text no longer matters. Extensive textual content can provide an unmatched level of completeness, and sometimes a simple image cannot explain complicated concepts. By combining the two elements, text and images, however, you can achieve the best results.

HOW CAN VISUALS IMPROVE MARKETING RESULTS?

With the enormous amount of content and information circulating each and every day, companies need to do everything they can to differentiate themselves. Using visual elements is much more effective than text alone because - as we have just seen - visuals can capture customers’ attention.

Moreover, the use of relevant and compelling visuals generates more engagement, as it makes website visitors stay longer on the page, consume more content, and understand the messages you are trying to deliver.

The use of high-quality lead magnets such as infographics or canvases can also bring in plenty of relevant inbound links, boosting your ranking in search results and increasing brand relevance. It has been proven that customers make decisions based on what they remember. Thus, by leveraging the most critical driver of customers’ choices - memory - visuals ultimately increase your chances of being recognized.

WHAT TYPE OF VISUAL CONTENT WORKS BEST?

According to iScrabblers, a real photo produces 35% better results than a stock photo, while photos of employees and customer testimonials drive engagement in terms of viewing time and conversion rate, respectively.

Also, colors capture attention, increase recall, comprehension, and brand recognition. Not to mention how they can influence human emotions: certain colors or color combinations generate particular feelings and affect the way people (and customers) make decisions.

To achieve better results, consider putting more effort into creating original content and matching colors with the emotions you want your message to evoke.

HOW CAN YOU FIND THE PERFECT IMAGE?

Marketers often struggle to produce engaging visual content on a consistent basis. As a result, more and more companies adopt online tools or software to facilitate the process of producing such content and enhance its performance.

However, this might not be enough: the fact that your image is beautifully crafted doesn’t mean that it is also effective. Every time you grab the audience’s attention but they don’t recall your brand or product, you are losing a chance to convert and monetize.

So, how can you find the perfect image for your blog post, advertising, or product presentation? You can rely on AI tools such as Image Memorability, which can answer this specific need, revealing the memorability score of images or advertisements before they are published, to predict the effectiveness of your visual marketing.

 

Photo by Tony Webster on Unsplash

Are Your Product Images Really Effective? Ask the AI.


What makes people buy?

Among all the questions that marketers have always been trying to answer, this is undoubtedly the most important - but also, at the same time, the most complex.

Marketing research has paved the way to get closer to the answer, narrowing the field to questions like "What makes a memorable advertisement?", "What makes people remember your brand and your product?". Scientific studies have established that most consumer decisions are memory based. Thus, marketers continually look for ways to make people remember their brand and products, working through memory with their ads, messages, and tv commercials.

But memory alone is not enough.

Many of you probably remember the famous TV commercial Fiat launched in 2002 - the one with the catchphrase «Buonasera…». It was a great campaign that immediately went viral, but it had a problem: everyone remembered the spot, yet no one remembered the brand (many could not even recall the automotive sector).

Your ad has failed if it’s so boring that nobody notices or remembers it. But it has also failed if it’s hilarious and exciting, yet nobody can recall your brand.

For promotional images, it's the same thing. The fact that your image is impactful doesn't mean it is effective.

For instance, let's look at this image.

memorability

If I told you that the sales target here is the pair of shoes, would you say it is an effective image?

Now, whatever your opinion, it will certainly be different from that of many other people, regardless of whether they are advertising experts or not.

The reality, in fact, is that only technology can give a clear answer to the question.

Let’s see why.

memorability_map

Applying our AI model, based on deep learning algorithms, we discovered that this promotional image is quite memorable. The memorability score is 0.834, which means that - according to the calculation logic of the model - 60% of people will remember it about 30 days after first viewing.

Furthermore, as you can see from the heat maps, the objects that are positively correlated with memorability are the white sweater on the upper left and the pink garment on the right. They are responsible for activating people's memory, unlike the other objects in the image. In other words, they would be what makes people buy.

As a result, this image is not particularly effective. Although it is easy enough to remember, what remains in people’s mind is not the sales target, but other surrounding objects.

Now think about the images you have used in your recent campaigns. Are you sure they were really the best option you had? How can you avoid using images that are not memorable and are likely to make your strategy less effective? Discover Image Memorability and learn more about your images.

Digital Innovation In Retail – Towards An Empathic Customer Experience


What will the future bring for leading brands in the retail and fashion industry?

With the rise of e-commerce giants like Amazon, Alibaba, and eBay, the retail scenario is being rebuilt on a digital foundation, where competition hinges on the ability to meet an entirely new set of behaviors, expectations, and priorities of today's shoppers.

On-demand services and instant gratification available at any time are giving customers ever greater control over their purchase journey and increasing their power over brands. Speed, ease, and contextual, individual relevance have gone, within a few years, from valuable nice-to-haves to essential must-haves.

However, few are really trying to bridge the gap between insight and action. Leading companies are doing so by using technology to innovate their customer experience with a human-centric approach, changing how they interact and engage with today's customers.

Timberland launched a context-aware email marketing campaign, shaping ads for different weatherproof products to match each user's position and weather conditions in real-time.

Since at least 2013, Amazon has been taking note of our shopping behavior and tailoring recommendations for each of us. And as we continue browsing, the personalization keeps improving.

Even customer support has become much smarter. On companies' websites and e-commerce platforms, chatbots and virtual assistants use natural language processing to help customers effortlessly navigate questions, FAQs, or troubleshooting.

In the offline world, we see stores and shop windows coming to life with interactive digital signage systems and 3D content in augmented and virtual reality. Even behind the scenes, store analytics is becoming a common practice that will soon rival online analytics, helping retailers better understand shopper behavior and measure the impact of different areas of the store environment.

It is easy to see how all these applications have one thing in common: AI.

Artificial Intelligence is disrupting the retail industry because it enables marketers to automate, and bring to a large scale, something that until a few years ago required laborious small-scale processes: tailor-made experiences, custom-designed for each individual.

But there is still something wrong with AI today - a missing piece needed to move from the now outdated customer-centric approach to a people-centric path, more consistent with the evolving needs and wants of today's shoppers. It is predicted to be the future of AI, one that will progressively bridge the gap between the offline and the online world. That missing piece is empathy.

We have identified 10 key factors for an empathic customer experience. You can find them in the "Digital Innovation in Retail & Fashion" report, now free to download.

 


Photo by Alejandro Alvarez on Unsplash


May 24 and 25, 2018, Amsterdam, The Netherlands: the future of technology was there, at the TNW Conference 2018, the award-winning 2-day European festival dedicated to innovation, marketing, communication, and creativity.

With 19 tracks of content, a huge variety of topics was covered: Artificial Intelligence, Machine Learning, and Deep Learning changing companies' businesses; design thinking transforming our work and helping us solve complex problems; new marketplaces growing retailers' e-commerce exponentially; Virtual and Augmented Reality making physical and digital objects coexist; and many others.

In this wide range of specialties, what are the key insights for the digital experience leaders? Here are the three main trends we have observed.

Artificial Intelligence will turn into Emotional Intelligence
Opening the 'Machine: Learners' track, Cassie Kozyrkov, Chief Decision Scientist at Google, shared her thoughts on decision intelligence engineering, the emerging discipline that focuses on using ML and AI to improve companies’ businesses.

With one statement, she captured the attention of the entire audience: 2030 will be the age of emotional intelligence. The Human-AI symbiosis that will take place in the coming years will shape the way brands connect with customers across all digital and physical touchpoints, making their relationship closer, more personal, and more intimate.

This will become possible thanks to the ability of Machine Learning and Deep Learning to foster and advance brands' social skills, enabling them to change their communication style depending on customers’ emotions and reactions.

If the customer is in a hurry and impatient, or anxious and stressed out, brands will be ready to deliver a different experience than if s/he's calm and relaxed; just like a good seller does when dealing with customers in the store.

Context-aware Artificial Intelligence unlocks the power of Customer Experience
In a world where customer expectations are constantly evolving, 89% of companies believe that customer experience will be their primary basis for competition (Gartner, 2015). That is how Adrian McDermott, President of Products at Zendesk, started what has been one of the most eye-opening speeches of the event.

Artificial Intelligence solutions can help companies to increase customer satisfaction by providing:

- Automation, which removes repetitive work (think of an answer bot instead of a customer service professional).
- Recommendation, which uses content cues to inform the decisions customers make - by offering, for example, the right information and help at the right moment.
- Prediction, able to spot trends that humans can’t see - such as the expected customer satisfaction, the probability that a customer will become loyal to your brand, or that s/he will recommend your product to others.

Over the coming years, these three AI-based levers will allow leading companies to:

- Embrace a people-first approach, which means capturing the customer behind the analytics, beyond purely objective data such as demographics.
- Adopt a growth mindset, by figuring out what their customer segments look like and A/B testing what kind of interactions they should activate across those segments.
- Deliver seamless omnichannel experiences and context-based conversations with customers, to close the gap with customers' habits and offer them comprehensive shopping experiences.

Digital communication will move to dialogue
By 2020, the average person will have more conversations with bots than with their spouse (Gartner, 2016). What is certain is that, within the next few years, having a bot on your app and website will go from being an optional nice-to-have to an essential must-have.

A poorly designed bot, however, becomes a frustrating user interface that will drive your customers away, explains Purna Virji, Senior Manager of Global Engagement at Microsoft. Convinced that we can do much better than the state of the art, she revealed the key principles of designing conversational AI - those she calls the "4 C's":

A. Clarity.
Mind your language, create a conversational flow and see what sounds natural. To avoid "robotic" perceptions, write for the ear and not for the eye, as the right words to create engagement and trust are not those beautiful to read but those that are nice to hear.

B. Character.
People prefer a virtual agent with an easy-to-perceive personality: it can be warm, formal, or even funny ... For example, if a customer says “thank you” at the end of a conversation, a professional bot will reply “you’re welcome,” while a more empathic bot can answer “you bet!”, and a very friendly one can say “no prob.”

But be careful: do not fall into the trap of turning the bot into a fake human. The goal isn’t for the customer to think they’re talking to a real person, so it’s best if the bot is easy to get to know, with a specific personality, but still clearly a bot.

C. Compassion.
Stepping into your customers’ shoes and making your user interface better understand and resonate with them is probably the biggest struggle for today's bots. Think, for example, of their common reactions to small talk.

Even though small talk is common, that's where a bot's conversation often breaks down. Quite simply, if a customer types "tks" instead of "thanks", the bot will often reply "Sorry, I do not understand." Building small-talk scenarios thus becomes essential to avoid that embarrassing answer.

D. Correction.
There are lots of ways to correct an error without having to say "Sorry." One strategy, which also promotes sales, is to offer alternatives: if a customer asks to order red tulips but these are unfortunately out of stock, instead of saying "Sorry, we are out of stock of red tulips," the bot can reply "We’re out of red tulips, would you like yellow or orange tulips instead?" After all, isn't that what a good seller would do?
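To make the "4 C's" concrete, here is a minimal, hypothetical sketch of a rule-based reply function that applies Character (personality-dependent replies), Compassion (small-talk normalization) and Correction (offering alternatives instead of apologizing). All names, phrases, and the toy stock table are illustrative assumptions, not part of any real bot framework.

```python
# Hypothetical sketch of the "4 C's" in a tiny rule-based bot.
# The dictionaries below are made-up examples, not real product data.

SMALL_TALK = {  # Compassion: normalize informal small talk before matching
    "tks": "thanks", "thx": "thanks", "ty": "thanks",
}

THANKS_REPLIES = {  # Character: the same intent, voiced by different personalities
    "professional": "You're welcome.",
    "empathic": "You bet!",
    "friendly": "No prob.",
}

STOCK = {"red tulips": 0, "yellow tulips": 12, "orange tulips": 7}


def reply(message: str, personality: str = "professional") -> str:
    """Return a bot reply, avoiding a bare 'Sorry, I don't understand'."""
    text = message.strip().lower()
    text = SMALL_TALK.get(text, text)  # expand "tks" -> "thanks", etc.

    if text == "thanks":
        return THANKS_REPLIES[personality]

    if text.startswith("order "):
        item = text[len("order "):]
        if STOCK.get(item, 0) > 0:
            return f"Great, adding {item} to your order."
        # Correction: offer in-stock alternatives of the same product type
        alternatives = [name for name, qty in STOCK.items()
                        if qty > 0 and name.split()[-1] == item.split()[-1]]
        if alternatives:
            return (f"We're out of {item}, would you like "
                    f"{' or '.join(alternatives)} instead?")
    # Clarity: when nothing matches, ask a concrete follow-up question
    return "I didn't catch that - are you looking to place an order?"
```

In a real deployment the keyword matching would be replaced by an NLP intent classifier, but the design principles - normalize small talk, keep a consistent personality, and recover from errors with alternatives rather than apologies - stay the same.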

To conclude, this year's edition of the TNW Conference has given us significant insights that we can bring to the Digital Customer Experience environment. If “the world is machine readable,” as stated by Kevin Kelly, Co-founder of WIRED, during his compelling speech, we can add that it should be the same for customers, and for the way they think, feel and behave towards brands.

But - citing McDermott's words - “Oil has no value if you can’t extract energy from it. The same goes for data: they have no value if you can’t extract knowledge from them.”

That is why companies need to learn how to use Artificial Intelligence solutions to understand who their customers truly are, and thus build better products and experiences, designed for humans.

Download The 7 Pillars Of The New Customer Loyalty to define the foundations on which to build your engagement and loyalty strategy, create innovative experiences and establish a lasting and valuable relationship with your customers.