Stranger Danger - “Using Distributed Ledgers to Define Trust for Conversational Bots”

Trust = “firm belief in the reliability, truth, ability, or strength of someone or something”[1]


Is it possible to develop a model that gives users the ability to evaluate and trust a conversational smart bot before engaging with it?

Can Distributed Ledger technology be used to develop a data-driven trust model that is rooted in the number of transactions the bot has completed?

To start, let’s clarify some of the terminology in this document and describe the role that each one plays within the transaction.

  • Commercial Organization – Any organization and/or brand that currently uses Automated Assistants or Conversational Interfaces to interact with consumers, whether for support, knowledge transfer, or commercial and banking transactions.

  • Users – An individual or a surrogate assistant that interacts with a conversational bot for a commercial interaction, which may be a transaction, an information request, or another activity.

  • Conversational bot, Conversational Interfaces or Automated Assistants[2] – An automated tool, usually with some Artificial Intelligence (AI) and Natural Language (NL) capabilities, that performs an action and interacts with end users in a conversational manner.

  • Platform – The ecosystem the conversational bot operates from; for example, there are Alexa bots (Amazon), bots that work within Messenger (Facebook) or Slack, and many others.

  • Distributed Ledgers (DLT)[3] – A distributed ledger is a database that is consensually shared and synchronized across a network spread across multiple sites, institutions, or geographies.

  • Blockchain – A digital ledger in which transactions made in bitcoin or another cryptocurrency are recorded chronologically and publicly.

  • Trust Score – A number representing the aggregate number of transactions in a ledger. Both the ledger and trust score are immutable.
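The trust score defined above can be sketched as a simple aggregation over a bot's ledger. This is an illustrative sketch only: the `Transaction` record and `trust_score` function are hypothetical names, not part of any existing DLT API, and a real ledger would be distributed rather than an in-memory list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: entries cannot be mutated after creation
class Transaction:
    bot_id: str
    tx_hash: str
    validated: bool  # True once the ledger's members have voted the entry in

def trust_score(ledger: list[Transaction], bot_id: str) -> int:
    """Aggregate number of validated transactions recorded for a bot."""
    return sum(1 for tx in ledger if tx.bot_id == bot_id and tx.validated)

ledger = [
    Transaction("support-bot", "a1", True),
    Transaction("support-bot", "b2", True),
    Transaction("support-bot", "c3", False),  # not yet validated -> not counted
]
print(trust_score(ledger, "support-bot"))  # -> 2
```

A user (or their client software) could consult this number before engaging a bot, in the same way the restaurant example below uses visible customers as a trust signal.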

The Problem:

Bots and other conversational automated tools infused with AI and NL continue to grow unchecked, and with them grows the risk that such interactions become malicious in nature. The intimate nature and simplicity of conversational interfaces create a security issue that can be exploited for behavior-driven phishing, brand misrepresentation, transactional attacks, and the misdirection of funds based on misrepresentation. The results of such negative interactions are numerous: they damage the affected brands, break down the trust relationship between the user and the technology, and open up another security attack vector through which further attacks can be perpetrated.

An example of the risk of such interactions, and of their impact on a brand, is given by John Tolbert[4] in his article about “Fake Tech Support Spiders”.

So how does a user determine that an automated assistant is trustworthy?

Let’s use the example of a consumer entering a restaurant for the first time. They enter the restaurant based on a few things:

  1. they have a need;

  2. the fact that the restaurant has a physical presence creates a perception of validation and trust; and

  3. the ability to evaluate the trustworthiness of the restaurant by its product displays, its brands, and the number of people eating or who have eaten there provides a level of comfort: it is a restaurant, and it provides food.

I believe that by using a distributed ledger model, we can replicate these signals in the cyber world.

Here is the same example, interacting with a conversational bot.

  1. the user has a need;

  2. they reach out to a recognizable brand; and

  3. today, the user has no way to know how many transactions (if any) the bot has performed. What if the user could know how many “validated” transactions the bot has performed? They could then make an informed decision before interacting with it.

DLT provides a mechanism to assess the trustworthiness of a bot because, at its core, it is a transactional ledger in which all parties validate a transaction before it is entered into a public ledger. This validation by independent parties coming together for a given transaction provides a strong level of reliability: they all must “vote” that the transaction was performed before it is entered into the public ledger; otherwise it is not entered at all. Because the validators are independent, no individual party can influence or persuade them to validate a transaction, making DLT a mechanism that can provide transactional information about bot activity and deliver a basic level of trust that the bot is performing the activities it claims to.
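The validation rule described above can be sketched in a few lines: a transaction is appended to a hash-chained ledger only if every independent validator votes that it was performed. All names here (`Ledger`, `submit`, the sample validators) are hypothetical, and a real DLT would distribute both the validators and the chain across a network rather than keeping them in one process.

```python
import hashlib

class Ledger:
    def __init__(self, validators):
        self.validators = validators  # independent parties that vote on each record
        self.entries = []             # list of (record, entry_hash) pairs

    def _hash(self, record: str) -> str:
        # Chain each entry to the previous one so past entries cannot be
        # altered without breaking every later hash (immutability).
        prev = self.entries[-1][1] if self.entries else "genesis"
        return hashlib.sha256((prev + record).encode()).hexdigest()

    def submit(self, record: str) -> bool:
        # Every validator must vote "yes", or the record is not entered at all.
        if all(vote(record) for vote in self.validators):
            self.entries.append((record, self._hash(record)))
            return True
        return False

# Two illustrative validators: one accepts any non-empty record,
# one refuses to validate records mentioning refunds.
accepts_any = lambda r: bool(r)
no_refunds = lambda r: "refund" not in r

chain = Ledger([accepts_any, no_refunds])
print(chain.submit("bot-42 completed order #1001"))  # True  -> entered
print(chain.submit("bot-42 issued refund #1002"))    # False -> rejected
print(len(chain.entries))                            # 1
```

The unanimous-vote check in `submit` is the sketch's stand-in for the consensus step; the aggregate `len(chain.entries)` is exactly the trust score defined earlier.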

To clarify, this ledger is not to be confused with a feedback rating system: the objective is to establish trustworthiness based on real interactions, which cannot be influenced or altered by outside parties. The DLT is focused on capturing how many transactions have been performed and validated by the members of the DLT, with the goal of capturing immutable data about the bot’s activity, validated by the members of the bot’s ecosystem.