TUATARA shares its innovations during AI & Big Data Congress

Innovation is in our DNA. During the AI & Big Data Congress, we had an opportunity to prove it by presenting our idea for data monetization and a pioneering solution that guarantees compliance with the GDPR provisions.

AI & Big Data Congress

AI & Big Data Congress is a unique and prestigious event attracting the most important representatives of business, government administration, media and science. During this edition (March 12-13th), we had a chance to share two innovations created by TUATARA.

Combining data from dispersed sources – the fourth data monetization model

The first speech covered real-time marketing and data monetization based on combining bank and telecommunications operator data. Our Chief Innovation Officer, Krzysztof Goworek, presented an inspiration, a case study and an announcement of a breakthrough cross-industry alliance in the field of data monetization.

He spoke of TASIL – a data monetization platform implemented in Oman and operating in a plug & play model, used for real-time marketing by hundreds of Omani companies.

Krzysztof Goworek

The vision of the platform development

Platform development is based on extending the ecosystem with companies from other industries, including banks. This will allow TASIL to move from the third data monetization model it currently operates in to the fourth one.

Case study

A campaign implemented for one of the telecommunications operators, using TASIL and retargeting in social media, served as a use case example. The campaign aimed to move the largest possible number of clients from analogue channels to digital ones. Over two months, it achieved results that speak for themselves: an over 20% conversion rate, a 120% increase in revenue from digital channels and an over 3000% return on investment.
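To put the last figure in perspective, here is a minimal sketch of what an over 3000% return on investment implies, assuming the common definition ROI = (revenue − cost) / cost (the campaign's actual accounting is not specified in the source):

```python
def roi(revenue: float, cost: float) -> float:
    """Return on investment as a fraction (1.0 == 100%)."""
    return (revenue - cost) / cost

# An over 3000% ROI means each unit of spend returned more than
# 31 units of revenue: (31 - 1) / 1 = 30.0, i.e. 3000%.
assert roi(31.0, 1.0) == 30.0
```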

To learn more about the TASIL platform head to

A step further

A combination of telecommunications operator and bank data opens up a range of new opportunities and huge business potential for marketing and customer service, and ultimately it significantly increases a company's revenue. A company with a 10 million opt-in database can earn over $167 million in additional revenue within 5 years. When implementing the fourth model of data monetization, based on the synergy of data from various industries, each of the platform's partners can earn another $50 million.

Until now, provisions on personal data protection have made the integration of data from dispersed sources impossible. So how can the fourth model of data monetization be implemented in the face of GDPR? Krzysztof Goworek announced the creation of a new platform that, thanks to the PSD2 directive, will enable the formation of a broad, open-banking-led ecosystem for data monetization.
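A quick back-of-the-envelope check of the figures above shows what they imply per subscriber (this is simple arithmetic on the quoted numbers, not a figure from the presentation):

```python
# Figures quoted above: over $167M of additional revenue
# from a 10 million opt-in database within 5 years.
OPT_IN_SUBSCRIBERS = 10_000_000
ADDITIONAL_REVENUE_USD = 167_000_000
YEARS = 5

per_subscriber_per_year = ADDITIONAL_REVENUE_USD / OPT_IN_SUBSCRIBERS / YEARS
print(f"${per_subscriber_per_year:.2f} per opt-in subscriber per year")  # $3.34
```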

See the presentation.

Artificial Intelligence as a guarantee for safety of sensitive data

Tomasz Rzeźniczak, Head of Data Science SensID at TUATARA, spoke about the use of AI as a condition for obeying the law in the digital society, presenting sensID – a product that ensures the security of sensitive data in compliance with GDPR.

Tomasz Rzeźniczak

Digital society

Digitization is a fact. Due to the Internet, the development of mobile devices and the convergence of media, traditionally understood national borders are disappearing and distance ceases to matter. An information-based society that benefits from a consistent digital market is a goal of the European Commission. A global network that allows sharing any information and expressing opinions on any subject is both a value and a threat, and anonymity in the network is often abused. In order to build a secure digital society, the European Commission has created a set of legal norms that should be respected and followed. They concern, among other things, digital privacy (secured by the GDPR and the ePrivacy Directive), copyright, and the European strategy for a safer and better Internet for children.

AI – a blessing or a problem?

It seems that a digital society could not exist and function properly without the support of Artificial Intelligence. However, AI, being a driving force of the digital society, also creates a field for potential abuse.

With Artificial Intelligence used as a tool for recognizing patterns and making predictions, still partly controlled by a human being, violation of norms largely depends on humans. Using data that reflects systemic disparities in society, it is easy to perpetuate stereotypes or exacerbate inequalities. For instance, predictions regarding teenage pregnancies based on purchasing history may carry a bias, and a decision to grant a mortgage or an assessment of a person's propensity for future criminal activity may be discriminatory and unfair. Google, the global giant of the digital world, has been accused of a series of such abuses, among them discrimination against women (well-paid job offers displayed more frequently to male users), favouring its own stores in search results, and displaying ads for arrest-record websites much more often to black people.

AI as an autonomous decision-making tool can generate numerous abuses and raises many questions. Making autonomous decisions means operating beyond the guidelines provided by programmers; such systems can influence our preferences and make decisions without human participation. How can we make sure that the systems will not engage in unethical activities? And if they happen to violate the law, who should be responsible: the owner, the user, the designer, the producer, or maybe the computer itself? At what level should the law be enforced, given that smart instruments make a very high level of supervision easy to achieve? What should guide the systems when law and ethics diverge? For example, should driverless cars be programmed to comply with the law even when a driver takes control of them? Similar questions keep multiplying.

Supervisory tools based on AI

It should be assumed that an AI system ought to be subject to the full range of laws that apply to its human operator. It should also have a clear obligation to inform users that they are not dealing with a human being but, e.g., a bot. An AI system should not retain or disclose confidential information without the explicit permission of the source of that information.

People are not able to properly monitor very complex and constantly changing digital environments on their own, in terms of compliance with both law and ethics, nor can they guarantee that smart instruments will comply with the law. Legal order means not only law enforcement but also preventive measures, such as routine inspections of companies, the positioning of speed cameras and the hiring of customs officers. The supervisory system should be adapted to its recipients. In the case of a digital society, compliance with the law depends on the emergence of new instruments rather than the establishment of new regulations. What is needed is a new type of law enforcement program, based on artificial intelligence, responsible for:

  • auditing – confirmation of correct behaviour
  • monitoring – alerting about the lack of compliance
  • enforcement – enforcing lawful proceedings
  • ethics built into bots – informing about ethical standards
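As a purely illustrative sketch (all names here are hypothetical and not part of any TUATARA product), the first three functions above could map onto a single rule-driven interface like this:

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceReport:
    compliant: bool
    violations: list = field(default_factory=list)

class SupervisoryAI:
    def __init__(self, rules):
        # rules: mapping of rule name -> predicate over an event dict
        self.rules = rules

    def audit(self, event: dict) -> ComplianceReport:
        """Auditing: confirm correct behaviour against every rule."""
        violations = [name for name, ok in self.rules.items() if not ok(event)]
        return ComplianceReport(compliant=not violations, violations=violations)

    def monitor(self, events) -> list:
        """Monitoring: collect an alert for each non-compliant event."""
        return [r for e in events if not (r := self.audit(e)).compliant]

    def enforce(self, event: dict) -> dict:
        """Enforcement: block the event unless it is compliant."""
        report = self.audit(event)
        if not report.compliant:
            raise PermissionError(f"blocked: {report.violations}")
        return event

rules = {"has_consent": lambda e: e.get("consent") is True}
ai = SupervisoryAI(rules)
assert ai.audit({"consent": True}).compliant
assert ai.monitor([{"consent": False}])[0].violations == ["has_consent"]
```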

The creation of such supervisory tools, however, poses technical challenges. Diversified data formats (structured and unstructured data), natural language records as well as audio and video formats, the distribution of information about people across systems, large data volumes, and finally, AI-based systems themselves all cause many problems.

Using Big Data and AI, one must take into account difficult code traceability, the greater complexity of the algorithms' prediction base, and much lower system transparency. These call for human actions such as auditing machine learning algorithms against bias, preventing unfair and discriminatory search results, and preventing systems capable of continuous learning from copying unacceptable behaviours.

sensID – security of sensitive data in compliance with GDPR

TUATARA's product presented during the speech may be a solution to the issues related to the GDPR regulations with regard to the safe processing of personal data. sensID is an innovative set of specialized tools and services for managing personal data, including:

  • automatic inventory of personal data
  • building a consistent identity register based on multiple data sources
  • legal consents management
  • data anonymisation

It uses advanced technologies, including Machine Learning, NLP and probabilistic techniques to detect sensitive data and combine personal data from various sources into a single record.
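To illustrate the record-combining idea, here is a minimal sketch of probabilistic record matching. Python's `difflib` is a stand-in for the (unspecified) models sensID actually uses, and the field names and threshold are assumptions:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Average name + address similarity across two source records."""
    score = (similarity(rec_a["name"], rec_b["name"])
             + similarity(rec_a["address"], rec_b["address"])) / 2
    return score >= threshold

# Two records of the same person held in different source systems.
crm = {"name": "Jan Kowalski", "address": "Polna 5, Warszawa"}
billing = {"name": "J. Kowalski", "address": "Polna 5, Warszawa"}
assert same_person(crm, billing)
```

A production system would weight fields by how discriminative they are and calibrate the threshold on labelled match/non-match pairs.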

It enables continuous monitoring and ensures legal compliance with the GDPR by operating on metadata, without saving or copying personal and sensitive data from source systems into sensID repositories. It works on structured, semi-structured and unstructured data, such as files, system logs, corporate mail, etc.

The solution has been designed and built for Polish personal data. It comes with a ready-made lexicon and processing rules for the Polish language and data formats used in Poland, including specific identifiers such as identity document numbers and medical, legal and financial document IDs. The solution enables creating a similar model for any other language.
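As a hedged example of the kind of processing rule such a lexicon could contain (this illustrates the category of rule, not sensID's actual code), the Polish national ID number (PESEL) can be detected by format and validated by its publicly documented checksum:

```python
import re

# Official PESEL checksum weights for the first 10 digits.
PESEL_WEIGHTS = (1, 3, 7, 9, 1, 3, 7, 9, 1, 3)

def is_valid_pesel(candidate: str) -> bool:
    """True if the string is 11 digits with a correct PESEL check digit."""
    if not re.fullmatch(r"\d{11}", candidate):
        return False
    digits = [int(c) for c in candidate]
    checksum = sum(w * d for w, d in zip(PESEL_WEIGHTS, digits)) % 10
    return (10 - checksum) % 10 == digits[10]

assert is_valid_pesel("44051401359")      # well-known valid example
assert not is_valid_pesel("44051401358")  # wrong check digit
```

Flagging a field as sensitive only when both the format and the checksum match keeps false positives down compared with a bare regular expression.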

See the presentation.  

Are you interested in our solution? Visit our website to learn more about sensID.
