How to Use the Cognitive Service from Microsoft (LUIS) with Bot Builder

by SERHIY SKOROMETS, Software Developer at ElifTech

Conversational chatbots were created to help businesses with client communications, answer questions and take orders. To perform these tasks successfully, they need to be intelligent. In other words, good at Natural Language Processing (NLP). But the thing is, it’s pretty time-consuming to build your own NLP. Luckily, you don’t always need to start from scratch. At ElifTech’s Cool Projects Department, we used Bot Builder together with a cognitive service from Microsoft (LUIS) to make a simple AI chatbot. Here’s how cool our “ToDo” bot is.

Want to know how we made it? Keep on reading to find out.


Tools you’ll need to build our chatbot

We used lots of tools to build our chatbot. We wrote it on Node.js v8, using BotBuilder as a framework, and Dotenv to load environment variables from a .env file. We also used HTTP-status-codes, a simple module that turns HTTP status codes into constants, coupled with Keymirror. As for the database, a MongoDB server with Mongoist came in handy; Mongoist is a great MongoDB module with async/await functionality. To manage the REST API for listening to incoming messages, we used Restify. And to make the bot speak on the Ubuntu platform, we used Say.js. (Actually, we had to use it because the Bot Emulator currently has some issues with Text To Speech (TTS) synthesis, so we temporarily applied a simple Say.js module that uses Festival for speech synthesis in local testing.)
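
Since the addTask helper shows up later in the dialog code, here's a minimal sketch of what it could look like with Mongoist. The file name, connection string, and collection name are our assumptions, not part of the original project:

// todoStore.js: a minimal sketch of the addTask helper used in the dialog code below.
// The connection string and collection name are assumptions.
const mongoist = require('mongoist');

const db = mongoist(process.env.MONGO_URL || 'mongodb://localhost:27017/todo-bot');

// insert a todo document and resolve with the saved record
async function addTask(todo) {
    return db.todos.insert(todo);
}

module.exports = { addTask };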

Chatbot initialization

Now, let’s go step-by-step through chatbot initialization. First things first: the main file (app.js) requires the necessary modules:

require('dotenv').config();
const builder = require('botbuilder');
const restify = require('restify');
// our custom bot init module where we place the bot dialogs
const { botCreate } = require('./bot.js');

After that, we initialized the server in app.js.

let server = restify.createServer();
server.listen(3978, () => {
    console.log(`${server.name} listening: ${server.url}`);
});

If you want to use Bot Service to connect a bot to other channels, you’ll need to register your bot first.

Create a chat connector to use the Bot Framework service. Don't forget to set your MicrosoftAppId and MicrosoftAppPassword in .env:

let connector = new builder.ChatConnector({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});

// connector usage in the bot creation method
botCreate(connector);
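
The corresponding .env entries are just two plain lines (placeholder values shown):

MicrosoftAppId=<your-app-id>
MicrosoftAppPassword=<your-app-password>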

Use the REST API to listen for messages from users:

server.post('/api/messages', connector.listen());

In the “botCreate” method, we connected our chatbot to Microsoft Azure Bot Service and initialized all dialogs.

function botCreate(connector) {
    let inMemoryStorage = new builder.MemoryBotStorage();

    let bot = new builder.UniversalBot(connector, [
        function (session) {
            session.send('Hey there. I am ToDo-bot and ready to learn!:)');
        }
    ]).set('storage', inMemoryStorage);
}

Wham! We've completed the basic chatbot setup. Our chatbot now listens to us and answers back. If you want to test the chatbot locally, use the Bot Framework Emulator: just set the endpoint URL to http://localhost:3978/api/messages. After that, you can talk to your chatbot for the first time. Exciting, isn't it? For now, though, the chatbot has just one default answer: "Hey there. I am ToDo-bot and ready to learn!:)"

For real-world tasks, you have to create dialogs with the BotBuilder SDK. Through it, you define when to start a dialog and what your chatbot should say.

Dialog creation

To build a dialog flow with BotBuilder SDK, just follow Microsoft’s simple and informative instructions. This is an example of a greeting:

bot.dialog(intents.Greeting, [
    (session, args, next) => {
        // if the bot already knows the user's name, pass to the next dialog flow step,
        // where the bot sends back a greeting; otherwise, prompt the user for a name
        session.userData.userName ? next() : builder.Prompts.text(session, messages.askName);
    },
    (session, results) => {
        const userName = results.response ? (session.userData.userName = results.response, results.response) : session.userData.userName;
        session.send(messages.getBotGreetingMessage(userName));
        session.endDialog();
    }
]).triggerAction({
    matches: intents.Greeting
});

This is a simple greeting dialog flow. The chatbot checks whether users have provided their names and asks them if needed. Then it greets the user.

But here's the interesting part of the code, where LUIS comes into play:

.triggerAction({
    matches: intents.Greeting // dialog triggered by the intent received from the LUIS app
});

This means that the current dialog was triggered by the intent received from the LUIS app. It’s sent after the user enters “Hello,” “Hi there” or whatever you taught the LUIS app to classify as a greeting. You know what that means — a computer “understanding” regular speech. The event isn’t triggered by some strict command or a regExp but by a normal, dare we say human, sentence or word.
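
The intents object used above is just a constants module; this is where the Keymirror package from our tools list comes in. A minimal sketch, assuming the intent names match what you define in the LUIS app:

// intents.js: a minimal sketch of the intents constants module.
// keyMirror({ a: null }) returns { a: 'a' }, so each key mirrors its own name.
const keyMirror = require('keymirror');

module.exports = keyMirror({
    Greeting: null,
    AddTask: null,
    askForTaskName: null
});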

How to create and apply a LUIS app

It's time to get serious. Here's how you can apply Microsoft's LUIS. First, you need to create a LUIS app. You'll get a LUIS endpoint URL when you publish the app; save it in the .env file as:

LUIS_MODEL_URL=<here-goes-luis-app-endpoint-url>

We will use this URL to connect the bot recognizer to the LUIS app:

// bot.js, botCreate method:
function botCreate(connector) {
    let inMemoryStorage = new builder.MemoryBotStorage();

    let bot = new builder.UniversalBot(connector, [
        function (session) {
            session.send('Hey there. I am ToDo-bot and ready to learn!:)');
        }
    ]).set('storage', inMemoryStorage);

    // connect to LUIS
    let recognizer = new builder.LuisRecognizer(process.env.LUIS_MODEL_URL);
    bot.recognizer(recognizer);

    // ... all dialogs are defined here ...
}

Now, when the bot receives a text message, it uses the LUIS app to recognize the user’s intent from the text and trigger an appropriate dialog.

Now, for the most important part: training the LUIS app model. Here's how the LUIS app works: it receives a sentence and sends back a JSON object containing all the data it was able to extract from the user's input. The bot then takes the intent with the highest score and uses it to trigger the matching dialog. The JSON contains the top-scoring intent and the entities extracted from the sentence:

{  "query": "create a new task",  "topScoringIntent": {	"intent": "AddTask",	"score": 0.984749258  },  "entities": []}

The purpose of our method is to start a dialog and let the user save an item to the database.

Here's a tricky thing: when a user types a really long sentence such as "Add item — test entity extraction after a very very long sentence has been fed to my smart model," LUIS sends back only part of the task name as an entity:

{
  "query": "Add item - test entity extraction after very very long sentence has been fed to my smart model",
  "topScoringIntent": {
    "intent": "AddTask",
    "score": 0.9868549
  },
  "intents": [
    {
      "intent": "AddTask",
      "score": 0.9868549
    }
  ],
  "entities": [
    {
      "entity": "test entity extrac", // missing the full task name!
      "type": "taskName",
      "startIndex": 12,
      "endIndex": 94,
      "score": 0.381245255
    }
  ]
}

As you can see, it extracted only a part of the sentence. Luckily, we can solve this issue by using short and long utterances for training. To create a new intent:

  1. Go to the created LUIS app.
  2. In your app, find the Intents and Entities lists.
  3. Go to the Intents list and click the “Create new intent” button.
  4. Type in a new Intent name.

We recommend creating five example utterances for each intent. It's important to vary the utterances so the LUIS machine learning algorithm learns to handle different phrasings. When you type in utterances for the current intent, you can mark Entities in them. This works if you've previously created an entity in the Entities menu, e.g., "taskName." Just click on a part of the utterance to mark the entity and choose its name from the Entities list. Alternatively, you can import a JSON with app data; you can find examples of doing that in the Microsoft BotBuilder repo.
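
For reference, here's a simplified sketch of such an import file. The exact schema version and field names may differ between LUIS versions, and the utterance and character positions are made-up examples:

{
  "luis_schema_version": "3.0.0",
  "name": "ToDoBot",
  "culture": "en-us",
  "intents": [{ "name": "AddTask" }],
  "entities": [{ "name": "taskName" }],
  "utterances": [
    {
      "text": "add item buy milk",
      "intent": "AddTask",
      "entities": [{ "entity": "taskName", "startPos": 9, "endPos": 16 }]
    }
  ]
}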

Basically, each time a user types some sentence, the LUIS app sends back intents. The chatbot’s actions are triggered based on these intents. As a result, the actions should lead to completing the task expressed in the original sentence.

Take this as an example:

1. The user types a sentence:

- Add some item.

2. The LUIS app extracts the intent "AddTask" from the user's sentence, and this triggers the dialog:

bot.dialog(intents.askForTaskName, [
   (session, args, next) => {
   	if (args) {
       	let intent = args.intent;
       	let entity = builder.EntityRecognizer.findEntity(intent.entities, 'taskName');
       	if (entity && entity.type == entities.taskName) {
           	next({response: entity.entity})
       	} else {
           	botSayInFestival({
               	message: 'please provide a task name',
               	expectingInput: true,
               	session: session,
               	callback: ()=> {
                   	builder.Prompts.text(session, messages.getWhatNewTaskName());
               	}
           	});
       	}
   	} else {
       	botSayInFestival({
           	message: 'please provide a task name',
           	expectingInput: true,
   	        session: session,
           	callback: ()=> {
                   builder.Prompts.text(session, messages.getWhatNewTaskName());
           	}
       	});
   	}
   },
   async (session, results, next) => {
   	let todo = {
       	title: results.response,
       	userId: session.message.user.id,
       	isRemoved: false,
       	isDone: false
   	};
   	try {
       	const result = await addTask(todo);
           session.send(messages.getSavedTask(results.response));
       	botSayInFestival({message: 'task saved', callback: next});
   	} catch (e) {
       	console.error(e);
       	botSayInFestival({message: 'Failed to save task'});
           session.send(messages.getCannotSaveTask(results.response));
       	session.endDialog();
   	}
   },
   (session) => {
   	const userName = session.userData.userName;
   	botSayInFestival({message: 'Wanna add more tasks?', expectingInput: true, session: session});
   	builder.Prompts.confirm(session, messages.getWantAddMore(userName));
   },
   (session, results) => {
   	if (results.response) {
           session.beginDialog(intents.askForTaskName);
   	} else {
           session.send(messages.getNoProblem());
       	botSayInFestival({
           	message: messages.getNoProblem(), expectingInput: true, session: session, callback: ()=> {
               	session.endDialog();
           	}
       	});
   	}
   }
]).triggerAction({
   matches: intents.AddTask
});

3. Data is saved to the database during the dialog flow.

4. The user receives a response from the chatbot.

Also, to make our AI chatbot more human-like, we applied the Say.js library, which provides TTS (text-to-speech) synthesis. As we mentioned earlier, this lets users hear the chatbot's responses on the Ubuntu platform.
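
For reference, here's a minimal sketch of what the botSayInFestival helper used in the dialog code could look like. The option names mirror the call sites above; say.speak(text, voice, speed, callback) is the Say.js API:

// speech.js: a minimal sketch of the botSayInFestival helper from the dialog code.
// On Ubuntu, say.js uses Festival for synthesis; a null voice picks the platform default.
const say = require('say');

function botSayInFestival({ message, session, expectingInput = false, callback }) {
    // session and expectingInput are accepted to match the call sites above;
    // any follow-up prompt is issued in the callback once speech has finished
    say.speak(message, null, 1.0, (err) => {
        if (err) {
            return console.error(err);
        }
        if (callback) {
            callback();
        }
    });
}

module.exports = { botSayInFestival };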

Note: if you use a speech-enabled channel, you should read and follow the Microsoft documentation. And if you want your chatbot to hear your speech, use STT (speech to text).

Wrapping up

And this is how we created our AI chatbot at ElifTech's Cool Projects Department. You can follow our simple instructions and integrate your own bot with Microsoft's cognitive service LUIS. In the end, you'll get an intent-based chatbot that takes action depending on the user's intents.

Our next challenge is to make more than just a simple intent-based chatbot. We want it to be a good conversationalist, which means the AI chatbot should be able to manage and understand the context of the conversation. To reach that goal, we plan to apply more deep learning technologies. We'll also try RASA for chat development; their lead engineer, Tom Bocklisch, presented a very inspiring approach at PyData Berlin 2017. We've got big plans and even more experiments ahead. Stay tuned!

This article was originally published on ElifTech Blog