The Voxco Answers Anything Blog
Read on for more in-depth content on the topics that matter and shape the world of research.
Inspire. Learn. Create.
Text Analytics & AI
Brand disambiguation using the Ascribe Coder API
Asking respondents to recall brand names is a common survey technique. Often this is done by asking the respondent to pick brands from a list. Such aided brand awareness questions can introduce bias. We can remove this bias by asking the respondent to type the brand name without providing a list. These unaided brand awareness questions can provide superior results, at the cost of having to disambiguate the often incorrectly spelled brand names.
In this post I will describe a technique to disambiguate brand mentions automatically. We will use a taxonomy to map brand misspellings to the correct brand name, then automatically code the responses using a codebook constructed from the taxonomy. This technique makes use of the Ascribe Coder API. It stores and automatically codes the brand mentions and returns the disambiguated brands to the API client. The API client can therefore use these corrected brand mentions for branch logic in the survey.
Summary of the Technique
The Ascribe Coder API can be used to automatically code responses. But coupled with Rule Sets it can also help automate curating brand mentions, improving the ease and accuracy of auto-coding. The approach we will take involves these key components:
- Manually build a taxonomy that groups brand misspellings to the correct brand names.
- Construct a Rule Set that maps incorrectly spelled brand mentions to the correct brand name using the taxonomy.
- Using the Ascribe Coder API:
- Set up the study and questions in Ascribe.
- Use the taxonomy to populate the codebook of the unaided brand awareness question.
- While the survey is in the field, send new responses for the unaided brand awareness question to the API for storage, brand disambiguation, and automated coding.
Brand Disambiguation
Imagine you have an unaided brand awareness question: “What brand of beer did you most recently drink?” Let’s suppose one of the brands you want to identify is Bud Light. You will get responses like:
- Bud Light
- Bud Lite
- Budweiser light
- Budwiser lite
- Bud lt
- Budlite
And many more. The creativity of respondents knows no bounds! Ideally you would like not only to correctly code these responses as “Bud Light”, but also to curate them so that they are all transcribed as “Bud Light”. How can we make this happen?
Building a Brand Taxonomy
Observe that the list of variations of Bud Light above can be thought of as a set of synonyms for Bud Light. We can make a taxonomy that maps each of these to Bud Light. Bud Light is the group, and the variations are the synonyms in that group. We can do this similarly for all the brands we are considering.
Obtaining the correct brand list
Wikipedia provides this list of brands and sub-brands for Budweiser:
- Budweiser
- Bud Light
- Bud Light Platinum
- Bud Light Apple
- Bud Light Lime
- Bud Light Lime-A-Ritas
- Budweiser Select
- Budweiser Select 55
- Budweiser 66
- Budweiser 1933 Repeal Reserve
- Bud Ice
- Bud Extra
- Budweiser/Bud Light Chelada
- Budweiser Prohibition Brew
- Budweiser NA
These become the groups in our taxonomy. We need to build synonyms for each group to capture the expected misspellings of the brand.
Creating the Synonyms
When building a taxonomy, it is good practice to start with the more specific brand names and progress to the less specific. I will demonstrate with the first three brands in the list above. Start with the most specific brand, “Bud Light Platinum”. We can construct a synonym to match this brand with these rules:
- The mention should contain three words
- The first word must start with “bud” (case insensitive)
- The second word must start with either “lig” or “lit”
- The third word must start with “plat”
- The words must be separated with at least one whitespace character
Let’s build a regular expression that conforms to these rules. Here is the portion of the regular expression that will match a word starting with “bud”:
bud\w*
The \w* matches any number of word characters. The match pattern for the third word is constructed similarly. To match the second word, we want to match a word that starts with “lig” or “lit”:
li[gt]\w*
The [gt] matches either the character “g” or “t”. Putting these three word patterns together gives:
bud\w* li[gt]\w* plat\w*
This will match any mention that has a word containing “bud” followed by a white space character, then a word that starts with “lig” or “lit”, followed by a white space character, then a word that starts with “plat”. This is not exactly what we want. This pattern will match:
- Bud Light Platinum
- Bud light platnum
- Budweiser lite platinum
But it will also match
- Redbud light platinum Stella Artois
To assure that we match mentions that contain only the synonym pattern, we need to surround the regular expression with the start of string and end of string operators:
^bud\w* li[gt]\w* plat\w*$
Finally, we should tolerate any number of whitespace characters between the words. The expression \s+ will match one or more whitespace characters. Hence our finished synonym is:
^bud\w*\s+li[gt]\w*\s+plat\w*$
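Before adding a synonym to the taxonomy, it can help to sanity-check the pattern against a few sample mentions. Here is a quick JavaScript sketch; the i flag stands in for case-insensitive matching, which is an assumption made just for this test:
var synonym = /^bud\w*\s+li[gt]\w*\s+plat\w*$/i;
var samples = [
  "Bud Light Platinum",
  "Bud light platnum",
  "Budweiser lite platinum",
  "Redbud light platinum Stella Artois"
];
samples.forEach(function (s) {
  // The first three match; the last is rejected by the ^ and $ anchors.
  console.log(s + " => " + synonym.test(s));
});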
Using Multiple Synonyms in a Group
We may well want to map the response “budlite platinum” to our group, but the synonym we created above will not do that. There is no space between “bud” and “lite”. We can fix this in one of two ways. First, we can try to make the synonym we created above also match this response. Second, we can make a new synonym to handle this case. For this simple example it is not hard to make the existing synonym also match this response, but in general it is better to add a new synonym rather than trying to make a single “one size fits all” synonym. The regular expression patterns are already hard enough to read without making them more complicated! Here is a second synonym that will do the job:
^budli[gt]\w*\s+plat\w*$
Using the Group Process Order
We may well want to make our taxonomy match “Budweiser” if a response starts with “bud”, but only if it does not match any of the more specific groups. Groups with a lower Process Order are checked for matches before groups with a higher Process Order. Groups with the same Process Order value are checked for matches in indeterminate order, so it is important to design the synonyms for groups with the same Process Order such that no two synonyms in different groups match the same string.
We can create a Budweiser group to match any single word that starts with “bud” by giving it a single regular expression synonym with this pattern:
^bud\w*$
Assuming the other groups in the taxonomy have the default Process Order of 100, we can assign this group a Process Order with any value greater than 100. This group will now match single words that start with “bud” only if no other group matches the string.
Creating the Rule Set
You create Rule Sets in Ascribe as a manual operation. We want to create a Rule Set that can be used by the Ascribe Coder API to disambiguate our brand mentions. Given our brand taxonomy, its job is to map the response provided by the respondent to the corrected brand name. Fortunately, the Rule Set is not tied to a particular taxonomy, so we need only one Rule Set, which can be used with any brand list taxonomy.
Create a new Rule Set in Ascribe. It initially contains no rules. We want to populate it with a Modify Response on Load rule that maps the brand the respondent typed to our curated brand name using the new taxonomy. It looks like this:
// Replace response with group
if (f.taxonomy) {
var group = f.taxonomy.Group(f.r);
if (group) {
f.r = group;
}
}
This rule says: if a taxonomy is passed to the rule, map the response to a group using the taxonomy. If there is a resulting group, replace the response with the group. The result is that the misspelled brand mention is replaced with the correctly spelled brand name.
Using the Ascribe Coder API
Armed with this taxonomy and Rule Set we have the hard part done. Now we need to make use of it to automatically code responses. The Ascribe Coder API supports the use of Rule Sets, which in turn allow access to a taxonomy.
Setting up the Study from Survey Metadata
If you wish, you can use the Ascribe Coder API to create the study and questions in Ascribe from the survey metadata, as described in this post. Alternatively, you can create the study and questions in Ascribe using the Ascribe web site.
Query for Resources by ID
When we created the taxonomy and Rule Set in Ascribe, we gave each of them an ID. Via the API we can query the Taxonomies and RuleSets resources to find the key for each. For example, we can query for the taxonomy list with a GET to
https://webservices.goascribe.com/coder/Taxonomies
The JSON response has this form:
{
"taxonomies": [
…
{
"key": 190,
"id": "Beers",
"description": "Beer brand disambiguation",
"countGroups": 35,
"countSynonyms": 47
},
…
],
"errors": null
}
If we know that we named our taxonomy “Beers”, we now know that its key is 190. While the ID of the taxonomy may be changed by the user, the key will never change. It is therefore safe to store this key away for any future use of the taxonomy. Keys can be found for other objects from their IDs in a similar fashion by querying the appropriate resource of the API. In this manner we can find the keys for our Rule Set, and for any study and question in that study given their respective IDs.
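As a minimal sketch of this lookup, assuming a fetch-capable client and a bearer token obtained as described in the authentication post (the bearerToken variable is illustrative):
// Look up the key of the "Beers" taxonomy by its ID.
// bearerToken is assumed to hold the full value returned by POST /Sessions,
// which already begins with the word "bearer" and a space.
const headers = { authorization: bearerToken };
fetch("https://webservices.goascribe.com/coder/Taxonomies", { headers })
  .then(response => response.json())
  .then(body => {
    const beers = body.taxonomies.find(t => t.id === "Beers");
    console.log(beers.key); // 190 in our example
  });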
Creating the Codebook from the Taxonomy
We are using our taxonomy and Rule Set to disambiguate brands as responses are loaded. If we create the codebook properly we can automatically code these corrected responses as they are loaded. As a bonus, we can use the taxonomy to create the codebook automatically. Once we have our taxonomy key, we can query the Taxonomies resource for the groups and synonyms of the taxonomy. The response body has this form:
{
"groups": [
{
"key": 22265,
"name": "Bud Light Platinum",
"synonyms": [
{
"key": 84539,
"text": "^bud\\w*\\s+li[gt]\\w*\\s+plat\\w*$",
"isRegEx": true
},
{
"key": 84540,
"text": "^budli[gt]\\w*\\s+plat\\w*$",
"isRegEx": true
}
]
},
{
"key": 225,
"name": "Budweiser",
"synonyms": [
{
"key": 8290,
"text": "^bud\\w*$",
"isRegEx": true
}
]
}
],
"key": 190,
"id": "Beers",
"username": "cbaylis",
"description": "Beer brand disambiguation",
"errors": null
}
Note that the group names are what we want as codes in our codebook. These are the correctly spelled brands. Now, to automatically code the corrected responses, all we need to do is provide a regular expression for each code in the codebook with the correct brand name, surrounded by the start and end of string operators, for example ^Budweiser$.
We POST to the Codebooks resource to create the codebook. The request body has this form:
{
"codebook": [
{
"description": "Bud Light Platinum",
"regexPattern": "^Bud Light Platinum$"
},
{
"description": "Budweiser",
"regexPattern": "^Budweiser$"
}
]
}
We have created our codebook from the taxonomy and have prepared it for regular expression coding of the correctly spelled brand names. As a defensive programming note, you should escape any regular expression operators that may appear in the brand name. This would include such characters as [.$*+?].
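Here is a sketch of how the codebook request body might be assembled from the taxonomy groups, including the defensive escaping just mentioned. The groups variable is assumed to be the groups array from the taxonomy response above, and the escaping helper is illustrative, not part of the Ascribe API:
// Escape regular expression operators that may appear in a brand name.
function escapeRegex(name) {
  return name.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
// Build the POST body for the Codebooks resource from the taxonomy groups.
const codebookBody = {
  codebook: groups.map(g => ({
    description: g.name,
    regexPattern: "^" + escapeRegex(g.name) + "$"
  }))
};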
Loading and Automatically Coding Responses
We now have all the tools in place to load and automatically code responses. We can do this after data collection is completed, or in real time while the survey is in the field. We can put the responses into an Ascribe study with a POST to the Responses resource of the API, as described here: https://webservices.goascribe.com/coder/Help/Api/POST-Responses-QuestionKey. In the body of the POST we send the responses, specify our Rule Set and taxonomy, and request that responses be automatically coded using regular expression matching against the codes in the codebook. The body of the POST has this form:
{
"responses": [
{
"rid": "100",
"verbatim": "budwiser",
"transcription": "budwiser"
},
{
"rid": "101",
"verbatim": "bud lite platnum",
"transcription": "bud lite platnum"
}
],
"autoCodeByRegex": true,
"ruleSetKey": 6,
"taxonomyKey": 190
}
Note that we provide the text of the response for both the verbatim and transcription of each response. The combination of our taxonomy and Rule Set will change the verbatim to the corrected brand name. By including the original response in the transcription, the original response text remains visible in Ascribe Coder. The response body has this form:
{
"codebookKey": 1540228,
"responsesAdded": 2,
"existingResponsesUnchanged": 0,
"responsesModifiedByRuleSet": 2,
"responsesVetoedByRuleSet": 0,
"responsesCodedByTextMatch": 0,
"responsesCodedByInputId": 0,
"responsesCodedByRegex": 2,
"addedResponses": [
{
"rid": "100",
"codes": [
{
"codeKey": 915872,
"description": "Budweiser"
}
]
},
{
"rid": "101",
"codes": [
{
"codeKey": 915873,
"description": "Bud Light Platinum"
}
]
}
],
"errors": null
}
The rid values in the response correspond to those in the request. We see that we have mapped the misspelled brand names to their correct spellings, and automatically applied the code for those corrected brand names. The codeKey values correspond to the codes in our codebook. If you are using the Ascribe Coder API directly from the survey logic, the codeKey and/or description can be used for branching logic in the survey.
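For example, a survey integration might branch on the code description returned for each respondent. This is a hypothetical sketch: the helper name and the branching targets are invented for illustration, and result is assumed to be the parsed response body shown above.
// Branch the survey based on the code applied to a respondent's brand mention.
function nextPageFor(rid, result) {
  const entry = result.addedResponses.find(r => r.rid === rid);
  const codes = entry ? entry.codes.map(c => c.description) : [];
  // Hypothetical branching: Bud Light drinkers get a follow-up question.
  return codes.includes("Bud Light") ? "bud-light-followup" : "generic-followup";
}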
Summary
We have seen how to use the Ascribe Coder API in conjunction with a taxonomy to correct brand name misspellings and automatically code the responses. While we have limited our attention to brand names, this technique is applicable whenever short survey responses in a confined domain need to be corrected and coded.
11/2/18
Text Analytics & AI
Why Choose Ascribe for Text Analytics
Choosing the Right Text Analytics and Sentiment Analysis Tool to Analyze Your Open-Ended Responses
Open-ended responses provide some of the most valuable insights for guiding business decisions, yet are often underutilized (or outright ignored) when analyzing market research surveys and other datasets with open-ended or textual responses. By leaving this key data on the table, the stories behind customer loyalty metrics, employee engagement data, innovation research, and more go untold. Critical opportunities are lost or—at best—acted on with limited insight.
Market researchers often report feeling frustrated that the data they collect, which they know holds key insights, goes unused, its potential untapped because analyzing it would be a long, arduous process they don’t have the time or resources to execute.
Ascribe CX Inspector was designed to meet the need for an efficient text analytics tool. With CX Inspector, gaining insight from verbatim comments is not only possible… It’s simple. Manually analyzing the open-ended responses sitting on your desk could take days or even weeks, but with Ascribe CX Inspector, a leading tool for sentiment analysis, that data becomes usable knowledge in a matter of minutes.
Analyze Large Amounts of Open-Ended Responses from Datasets Quickly and Easily
Ascribe CX Inspector is a customizable text analytics software that allows users to gather and sift through open-ended data quickly and easily. CX Inspector integrates structured and unstructured data using topic and sentiment analysis to better understand the responses provided in customer satisfaction, employee engagement, concept idea tests, and other valuable datasets.
X-Score
CX Inspector uses a patented tool called X-Score to efficiently measure satisfaction and sentiment and identify the key drivers of positive and negative sentiment, or satisfaction and dissatisfaction. A score between 0 and 100 indicates positive sentiment or satisfaction, while a score between -100 and 0 indicates negative sentiment or dissatisfaction. X-Score helps businesses uncover crucial next steps for improving the score and driving real business results, and can be used to benchmark results over time.
Driven by Sentiment and Topic Analysis
CX Inspector’s interactive and customizable dashboards empower market researchers to explore their data by filtering through key variables to better understand who or what is being affected—zero in on individual comments or focus on the top-level themes that matter most to you and your business. You can also analyze and create reports, like crosstabs and co-occurrences, to uncover knowledge to improve your understanding of the data.
Ease of Use
Ascribe CX Inspector is a leading text analytics tool for insight professionals and market researchers who have large datasets with open-ended comments that they would like to analyze, but haven’t found the time, money, or the right solution to do so. CX Inspector empowers you to analyze thousands—even hundreds of thousands—of open-ended responses in minutes (compared to hours, days, or weeks).
CX Inspector includes user-friendly features such as:
- Interactive, customizable dashboard
- ‘Drag and Drop’ grouping capabilities
- Ability to click through and see individual comments
- Translation from 100 languages
- Ability to filter by any variable
- PII removal and profanity clean-up
- Quick and easy data import and export
Ascribe CX Inspector uses advanced AI and NLP to smoothly integrate data and generate swift and accurate results. And if data integration is an issue, Ascribe offers custom API services to support data importing.
How to Use CX Inspector for Text Analytics and Sentiment Analysis
As any business owner knows, it costs more to gain a new customer or onboard a new employee than it does to retain an existing one. Text analytics and sentiment analysis from CX Inspector bridges the gaps between your business and your customers or employees by uncovering key sentiment, engagement, and loyalty insights.
But CX Inspector isn’t just limited to customer satisfaction and employee engagement datasets—in fact, it can be used to analyze open-ended responses from:
- NPS studies
- VOC research
- Advertising copy tests
- Various tracking studies
- Innovation research
- Concept idea tests
- And more
Whatever your data represents, CX Inspector empowers you to ask the valuable open-ended questions that previously felt off-limits due to time-consuming, costly, and complicated analysis. Already swimming in a sea of data? Finally make it work for you by analyzing it in a matter of minutes, and interpret it in a variety of meaningful ways that support your business.
CX Inspector is Your Solution to Analyzing Open-Ended Responses
With key insights that could inform critical business decisions on the line, analyzing and understanding open-ended survey results is more important than ever. Whether you need to gauge customer satisfaction or employee engagement, test advertising copy or concept ideas, Ascribe CX Inspector is the text analytics tool for the job. Analyze your datasets in minutes, visualize and export results in whichever way is most meaningful, and uncover key insights to help guide your business decisions.
Analyze open-ended responses and gain key business insights with CX Inspector
https://www.youtube.com/watch?v=ng2NmoOwgc4
10/4/18
Text Analytics & AI
Using Swagger Documentation and Postman with the Ascribe APIs
The Ascribe APIs provide documentation both as static html pages and an interactive Swagger page. The static documentation is more convenient for looking over the API, but the Swagger documentation provides powerful features for interaction with the API. We will use examples from the Ascribe Coder API, but the same techniques apply to the Ascribe CXI API.
To open the documentation page for the API, navigate to the root URL of the API in your browser. For the Ascribe Coder API this is https://webservices.goascribe.com/coder. The top of the page looks like this:
Click the Documentation link in the toolbar to open the static help pages, or the Swagger link to open the Swagger documentation. The top of the Swagger documentation page looks like this:
The REST resources available are laid out on the page like this:
Two resources are shown here, Codebooks and Companies. Each of these has one GET operation and one POST operation. Click on the operation to show details. For example, if we click the POST operation for the Companies resource we see:
The example response produced by Swagger is a bit confusing. Swagger wraps the response example in an object that specifies the content type, in this case application/json. The actual response body is the value of this property, or in this case:
{
"key": 3,
"id": "ACME"
}
This POST operation accepts parameters in the body of the request, as described in the Parameters section in the operation above. To get details about the fields of the response or request bodies, click on the Model link above the example. In this case for the Response body we see:
And for the Request body we find:
Experimenting with the API from the Swagger page
You can experiment with the API directly from the Swagger page. To do so, the first step is to obtain a bearer token for authentication as detailed in this post. To obtain the token, POST to the Session resource. Expanding the POST operation of the Sessions resource we find:
Click on the request example to copy it to the request body:
Now edit the values for the request body, providing your credentials. I’ll log in to an Ascribe account called “Development”:
Now click the Try it Out! button. Scroll down to the Response Body section to find:
The API has provided two tokens, either of which can be used for client interactions with the API. You can elect to use either token, but don’t use both. The tokens have different semantics. The authenticationToken should be passed in a header with key authentication. The bearer token should be passed in a header with key authorization. The word bearer and the space following it should be included in the value of the authorization header.
The authentication token will remain valid for thirty minutes after the last request to the API using this token. In other words, it will remain valid indefinitely provided you make a request with this token at least once every thirty minutes. The bearer token never expires, but will become invalid if the account, username, or password used to obtain the token changes.
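Outside of Swagger or Postman, passing either token from your own client code is just a matter of setting the right header. A minimal sketch, assuming a fetch-capable client and tokens obtained from the POST to Sessions described above (the token variables are placeholders):
// Option 1: the authentication token (30-minute sliding expiration).
fetch("https://webservices.goascribe.com/coder/Companies", {
  headers: { authentication: authenticationToken }
});
// Option 2: the bearer token. Its value as returned already begins with
// the word "bearer" and a space, so it is passed as-is.
fetch("https://webservices.goascribe.com/coder/Companies", {
  headers: { authorization: bearerToken }
});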
The bearer token (but not the authentication token) can be used for experimentation with the API from the Swagger page. To use the bearer token, copy it from the response body. It is long, so it is easiest to copy the response body and paste it into a text document. Then get rid of everything except the value of the bearer token. This means everything within the quotation marks, including the word bearer, but not including the quotation marks. You can save this value for future use so that you don’t have to go through this each time you experiment with the API.
Now, armed with your bearer token, paste it into the api_key box at the top of the Swagger documentation page:
Now you are ready to work out with the API. For example, you can expand the GET operation of the Companies resource and click Try it Out!
Assuming you have one or more companies in your account, you will get a response like:
{
"companies": [
{
"key": 11288,
"id": "Language Logic"
},
{
"key": 11289,
"id": "Saas-fee rentals"
},
{
"key": 11291,
"id": "ACME"
}
],
"errors": null
}
Using Postman
The Swagger documentation page is handy for simple experimentations, but you will probably want a better API development tool as you develop your own API client. Postman is one such tool. You can easily import the API from Ascribe into Postman. To do so, open Postman and click the Import button in the toolbar:
In the Import dialog, select Import from link. Paste in the URL from the toolbar of the Ascribe Swagger documentation page:
Click Import, and Postman will import the API like so:
Adding the bearer token
That did a lot of work for you, but you still must tell Postman about your bearer token. Edit the new Ascribe Coder API collection:
Select the Authorization tab and set the type to Bearer Token. In the Token box, paste the value of the bearer token without the leading word bearer and the space following it:
Completing setup in Postman
Now you are ready to work out with the API in Postman. Most of the operations are set up for you, but there are a few details to be aware of.
First, those operations that require a request body will not have that body properly populated. You can either edit these by hand or copy the example body from the Swagger documentation into Postman as a starting point.
Second, be aware that Postman will create variables for operations that accept parameters in the request path and query string. An example is the GET /Studies operation. This operation accepts three parameters in the query portion of the URL: idFilter, statusMask, and clientCompanyKey. If we try this operation in Postman using the initial configuration after importing the Coder API we get the response:
{
"studies": [],
"errors": null
}
Where are our studies? You might expect to see all the studies for your account listed. To understand why they are not listed, open the Postman console. In the View menu in Postman, select Show Postman Console:
Now, with the Postman console displayed, try the GET /Studies operation again. In the Postman console we see:
This is the actual GET request URL sent to the API. The request accepts three variables as parameters, and we have not supplied these variables. We can tell Postman not to send these parameters by opening the Params section of the GET request in Postman:
Clear the checkbox to the left of each of these parameters, then send the request again. The Postman console now displays:
and the response body contains a list of all studies on our account. To query only for studies that are complete, set the statusMask parameter value to c:
The console shows that the request now contains the desired status mask:
and the response body contains only those studies that are complete.
Using Swagger to Generate the API Client
The Swagger documentation can also be used to generate the initial code for your API client automatically. Visit https://swagger.io/tools/swaggerhub/ for more information. The code generation tools on this site can create API client code in a wide variety of languages and frameworks, including Python, Ruby, Java, and C#.
Summary
The Swagger documentation for Ascribe APIs does far more than simply documenting the API. It can be used to create a test platform for API client development using Postman. It can also be used to generate stub code for your API client.
10/4/18
Text Analytics & AI
Rule Set Examples
An Ascribe Rule Set lets you programmatically alter the results of linguistic analysis. See these posts for information on what Rule Sets do and how to author them.
Modify Finding Rule Examples
Rule Sets let you modify the results of linguistic analysis to correct problems or tailor the analysis to your business vertical.
Correct Sentiment Polarity
Suppose you have analyzed survey responses about a proposed new product offering. You observe that the comment:
I disliked nothing about the product.
produces a finding of negative sentiment, with a topic of “nothing” and an expression of “disliked”:
Looking through the survey responses you find that there are a few examples of this problem, always with a topic of “nothing”. Of course, the topic “nothing” effectively reverses the sentiment finding, but the linguistic analysis engine was not smart enough to realize that. You can fix this problem with a simple Modify Finding rule:
// If topic is "nothing" reverse the sentiment polarity
if (f.s !== null && /^nothing$/i.test(f.t)) {
f.s = -f.s;
}
If the finding has sentiment and the topic is “nothing”, we reverse the sentiment polarity.
Uppercase Brand Name
In this example suppose our Inspection contains mentions of our brand. When our brand name appears as the topic we would like to see it in upper case. Here is a Modify Finding rule to do that:
// Uppercase our brand
f.t = f.t.replace(/\bascribe\b/ig, "Ascribe");
The regular expression matches the word “ascribe”. The \b operators match word boundaries, so our regular expression matches the word “ascribe”, but not “ascribes”. The i flag makes the expression ignore case, and the g flag causes all occurrences of the match to be replaced.
This rule will correct the case of the topics but will not change the comment itself. If we wanted to do that we would need a Modify Response on Load rule.
Veto Finding Rule Example
If our business is windshield repair, comments from our customers will commonly mention broken windshields. The linguistic analyzer will merrily produce findings of negative sentiment to these mentions. But for us that’s not a negative thing, that’s our business! We may well want to discard findings of negative sentiment about broken windshields. We can use this Veto Finding rule:
// Discard negative sentiment findings for "broken"
if (f.s < 0) {
if (f.e == "broken") {
if (/glass|window|windshield/i.test(f.t)) {
return true;
}
}
}
Recall from Authoring Rule Sets that when a Veto Finding rule returns a Boolean value of true, the finding will be discarded from the analysis. The rule above will veto findings with an expression of “broken” and a topic that contains “glass”, “window”, or “windshield”.
Add Finding from Finding Rule Example
Let’s suppose that we are in the roof repair business. Two of the comments we received are:
- The repair of the roof is inadequate.
- The repair of the roof behind the chimney is inadequate.
Sentiment analysis correctly gives us a negative sentiment finding with topic of “repair” and expression of “inadequate” for the first comment, but not for the second. The more complex structure of the second comment caused it to miss the inadequate repair. But we really want to find these mentions of inadequate repair. That’s core to our business. What can we do?
A simple approach would be to use an Add Finding from Response rule that looks for the words “repair” and “inadequate” in the response, and adds a negative sentiment finding. But that approach is going to introduce a lot of incorrect findings, because it does not use any knowledge of the sentence structure. A better way would be to see whether the topic analysis could be used to improve our sentiment analysis results. Looking at the topic analysis we find that we do have a topic of “repair” and an expression of “is inadequate”. The topic analysis did a better job of finding inadequate repair than the sentiment analysis did. We can use an Add Finding from Finding rule to generate a new finding of negative sentiment from this topic finding:
// Add sentiment finding for inadequate repair
if (f.s == null) {
if (/repair/i.test(f.t)) {
if (/inadequate/i.test(f.e)) {
f.t = "repair";
f.e = "inadequate";
f.s = -2;
return f;
}
}
}
We first test whether this is a topic finding (f.s == null), and if so we look for the desired topic and expression. If we find them, we return a negative sentiment finding with a topic of “repair” and an expression of “inadequate”.
Because this rule adds findings, we end up with both the original topic finding and the new sentiment finding in our Inspection. If we used the same code as a Modify Finding rule we would end up with only the sentiment finding. The topic finding would have been converted to a sentiment finding. One approach is not superior to the other; the approach that is best for you depends on just how you prefer to modify the analysis.
Add Finding from Response Rule Examples
Add Finding from Response rules let you add findings to an Inspection independent of the linguistic analysis engine. As described in Authoring Ascribe Rule Sets, these rules let you augment the findings from linguistic analysis with findings you create.
One use for Add Finding from Response rules is to create “alerts” that find responses with some set of key words you want to flag.
Suppose we want to flag responses that may have reference to some legal action. We could look for responses that contain any of the words “lawyer”, “sue”, or “attorney”. If we find a response with one of these words we add a finding like so:
// Add alert finding for legal issues.
// Search for trouble words and add a new finding if one is found.
// Search pattern for problem words ignoring case.
var troublePattern = /\b(lawyer|sue|att?[oe]rne?y)\b/i;
// The array of match results
var arr;
// Check for a trouble word
if ((arr = troublePattern.exec(f.r)) != null) {
var word = arr[0]; // the captured word
// Setup the properties of the new finding
f.t = "Alert!"; // topic
f.e = word; // expression
f.x = word; // extract
// Returning a finding causes it to be added
return f;
}
Note that our regular expression also captures common misspellings of the word “attorney”. After the analysis it is simple to search for the topic “Alert!” and find all our alerts.
The rule above will add a single finding for the first problem word found. What if we want an alert for all the problem words? We can add multiple findings for a single response by returning an array of findings. If the array is empty, no findings will be added. Here is our modified rule:
// Add alert findings for legal issues.
// Search for trouble words and add a new finding for each.
// Search pattern for words with global flag.
var wordPattern = /\w+/g;
// Search pattern for problem words ignoring case.
var troublePattern = /\b(lawyer|sue|att?[oe]rne?y)\b/i;
// The set of new findings to return.
var findingsOut = [];
// Loop through each word in the response.
var arr;
while ((arr = wordPattern.exec(f.r)) != null) {
var word = arr[0]; // the next word
// Check for a trouble word
if (troublePattern.test(word)) {
// Create a new finding and set its properties
var newFinding = new Finding();
newFinding.r = f.r; // response
newFinding.t = "Alert!"; // topic
newFinding.e = word; // expression
newFinding.x = word; // extract
// Add the new finding to the set of findings to return
findingsOut.push(newFinding);
}
}
// Return the new findings
return findingsOut;
9/15/18
Text Analytics & AI
Authoring Ascribe Rule Sets
You use Ascribe™ Rule Sets to modify the output of the text analytics engine, introduce your own findings to the text analytics results, and to modify comments as they are loaded into Ascribe. Rule Sets are authored in JavaScript and require some knowledge of JavaScript to create them.
The structure of a Rule Set
A Rule Set has an ID, which is the name of the Rule Set. The ID must be unique among all Rule Sets in the Ascribe account. Rule Sets also have an Enabled property. If true, the Rule Set is available for use. If false, the Rule Set cannot be used. Rule Sets contain rules, with these types and purposes:
- Modify Finding: modify the findings of the linguistic analysis.
- Veto Finding: remove a finding from the linguistic analysis.
- Add Finding from Finding: insert new findings into the linguistic analysis by examination of the findings produced by the analysis.
- Add Finding from Response: insert new findings into the linguistic analysis by examination of the responses (comments).
- Modify Response on Load: change the text of comments as they are loaded into Ascribe.
- Class: code that can be used by any rule in the Rule Set
The first three rule types listed above operate on findings emitted by the linguistic analysis. Using these three types of rules you can tune the results of the analysis to your needs. Add Finding from Response and Modify Response on Load rules operate on responses (or comments), independent of the linguistic analysis. Class rules are distinct from the other rule types; they allow you to add code that can be used by any rule in the Rule Set. Each rule in the Rule Set, except for Class rules, also has an Enabled property. If disabled, the rule will be ignored when the Rule Set is executed.
Findings
A finding from the linguistic analysis engine has these properties:
- The comment that was analyzed to produce the finding. We refer to this interchangeably as the response, meaning the response to a survey question. In any case it is the text that was input to the linguistic analysis engine.
- The topic. For sentiment analysis this is the word or phrase about which sentiment was expressed. For topic analysis this is a topic mentioned in the comment, typically a noun or noun phrase. The topic may be empty in a sentiment finding. For example, the comment “It was awful” produces a finding of negative sentiment for which the topic is unknown.
- The expression. For sentiment analysis this is the expression of sentiment. The comment “The showers were terrible” produces a topic of “showers” and an expression of “terrible”. For topic analysis the expression is the word modifying the topic. The comment “I have worked out at the other gyms in the area” gives the topic “gym” and expression “have worked out at other”. For topic analysis the expression may be empty, when a topic is found without a modifying phrase.
- The extract. This is the segment of the comment that yielded the finding.
- The sentiment score. An integer value between -2 (strong negative) and +2 (strong positive). Topic findings have a null sentiment score.
A given comment analyzed by the linguistic engine may produce any number of findings, including zero.
Finding Type
When a rule is invoked it is passed a predefined object named f of type Finding. In JavaScript notation the Finding type would be defined as:
class Finding {
r; // Response (comment) text (string)
t; // Topic (string)
e; // Expression (string)
x; // Extract (string)
s; // Sentiment score (number)
}
An object f of type Finding also has these read-only properties:
f.IsValid; // boolean
f.IsInvalid; // boolean
The IsInvalid property is true if any of f.t, f.e, and f.x are null, empty, or whitespace. The IsValid property always returns !IsInvalid.
Finding Sentiment
Note that the type of the sentiment score property s is numeric. The allowed range of f.s is integer values [-2, 2]. Therefore f.s has five allowed integer values and may also be null. If f.s is null it means there is no sentiment associated with the Finding. This is different than f.s == 0, which means a finding of neutral sentiment. If f.s is assigned an integer value outside the range [-2, 2] it is treated the same as f.s == null, or no finding of sentiment.
Beware of type mismatch possibilities when assigning a value to f.s. Without an explicit type assignment to a numeric variable the implicit type is double, which will cause a type mismatch error when it is assigned to f.s. Examples:
f.s = 1; // OK, the value is an integer
f.s = 1.5; // Invalid field assignment error
f.s = Math.floor(1.5); // OK, double value 1.5 has been converted to int
f.s = 5; // No error, but equivalent to f.s = null
Finding Constructors
The Finding type has two constructors. With no arguments a new Finding object is created with string properties set to an empty (zero length) string, and sentiment to null:
var newFinding = new Finding();
// f.r == f.t == f.e == f.x == ""
// f.s == null
The second constructor accepts a single argument of type Finding:
var newFinding = new Finding(f); // properties of newFinding are the same as f
Rule Set Execution
When you load data into CX Inspector the entire workflow is executed. When you apply a Rule Set after data are loaded into an Inspection, execution begins with the stored responses and findings, and the Modify Response on Load rules are not executed. When a Rule Set is used with the Ascribe Coder API only the Modify Response on Load rules are executed. See Ascribe Rule Set Execution Workflow for a detailed description of Rule Set execution.
Rule Programming Language
The programming language used in Rule Sets is JavaScript, conformant to the ECMAScript language specification. This is a very full-featured language, but for most purposes you will not need to learn about the advanced features of the language. The code for most Rules is very simple.
Rule Syntax and Semantics
In this section we will cover the syntax and semantics of all rule types except for Class rules. Class rules are not directly invoked during Rule Set execution and are described in a later section.
Rules are the bodies of functions generated by the Rule Set compiler. If we have a Modify Finding rule:
// If topic is "nothing" reverse the sentiment polarity
if (f.s !== null && /^nothing$/i.test(f.t)) {
  f.s = -f.s;
}
The compiler generates this code from the rule:
function _Rule9155(f) {
  //{-- Modify Finding rule
  if (f.s !== null && /^nothing$/i.test(f.t)) {
    f.s = -f.s;
  }
  return f;
  //--} Modify Finding rule
}
The compiler has placed the rule body within a function with a single argument: the finding. It has also added a return f; statement at the end of the function body. You can inspect the source code generated by the Rule Set compiler by opening the Rule Set and clicking the Print icon.
Modify Finding Rules
Modify Finding rules participate in the pipeline of rules described in Ascribe Rule Set Execution Workflow:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
The compiler introduces a return f; statement at the end of the rule body. This return statement is not introduced by the compiler for other rule types, except for Modify Response on Load rules. The rule can change the properties of the finding f. The finding returned by the rule is passed to the next rule in the pipeline.
The finding returned by the rule will be ignored unless a valid finding is returned. When a finding is ignored for this reason it is as if the rule had not executed. The finding originally passed to the rule is passed to the next rule in the pipeline. The finding returned by a Modify Finding rule is ignored if:
- The finding returned is null,
- or any of the properties t, e, x is null or whitespace.
The rule may change the value of f.r, but any changes to that property will be ignored. A Modify Finding rule cannot cause the text of the response to change.
Veto Finding Rules
Veto Finding rules also participate in the pipeline of rules described in Ascribe Rule Set Execution Workflow:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
Veto Finding rules can veto the finding and cause it to be discarded by returning true. A vetoed finding is discarded from the analysis and the remaining rules in the pipeline are short circuited for that finding. While a Veto Finding rule can modify the properties of the finding f, any such changes have no effect on the finding in the pipeline. Only the value returned by the rule is considered. If the rule returns true the finding is vetoed, otherwise the rule has no effect.
The rule must return a Boolean true value to veto the finding. Returning a truthy value such as 1 or “veto” will not veto the finding. Examples:
return true; // vetoed
return 1; // not vetoed
return !0; // vetoed
Add Finding from Finding Rules
Add Finding from Finding rules are the last part of the pipeline of rules described in Ascribe Rule Set Execution Workflow:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
These rules can add additional findings to the analysis, by inspection of the finding from the linguistic analysis. Each rule can add up to 1000 findings. If the rule returns a single valid finding, that finding is added to the analysis. If the rule returns an array of findings, all the valid findings returned are added to the analysis. A finding returned by an Add Finding from Finding rule is ignored if:
- The finding is null,
- or any of its properties t, e, x is null or whitespace.
An Add Finding from Finding rule with no return statement will do nothing. The rule is free to modify the properties of the finding f and return that object. The value returned will be added to the analysis, but the original finding passed to the rule will not be affected. Add Finding from Finding rules cannot affect the finding that they are passed.
Note that the trivial rule:
return f;
will add duplicates of every finding produced by the linguistic engine to the analysis, doubling the number of findings!
To add multiple findings to the analysis, return an array of Finding objects. This Add Finding from Finding rule will add five new Findings to the analysis, one with each of the allowed integer sentiment scores. The other properties of the new findings will be the same as f:
var newFindings = []; // create an empty array
for (var s = -2; s <= 2; s++) {
var nf = new Finding(f); // clone f
nf.s = s;
newFindings.push(nf);
}
return newFindings;
Add Finding from Response Rules
As described in Ascribe Rule Set Execution Workflow, Add Finding from Response rules execute independently of other rule types. They are not part of the pipeline:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
Instead, Add Finding from Response rules operate on all the responses (comments) in the analyzed variables in the Inspection. These rules let you augment the analysis with findings created by your Rule Set. Like Add Finding from Finding rules, Add Finding from Response rules can add up to 1000 new findings to the analysis, and they use the same semantics for adding findings. Returning a single Finding object whose IsValid property is true will add that finding to the analysis. Returning an array of Finding objects will add all those whose IsValid property is true to the analysis.
The Finding object f passed to an Add Finding from Response rule has only its f.r property set to the text of the response. The properties f.t, f.e, f.x, and f.s are all null. Hence you must set f.t, f.e, and f.x to valid values to add a new finding, as the sketch below shows.
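Here is a minimal sketch of such a rule, following the conventions above. The keyword and the topic label are invented for illustration:
// Add a finding for any response that mentions a (hypothetical) keyword.
if (/\brefund\b/i.test(f.r)) {
  f.t = "Refund";   // topic
  f.e = "refund";   // expression
  f.x = f.r;        // extract: use the whole response
  return f;         // a valid finding, so it is added to the analysis
}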
Modify Response on Load Rules
As described in Ascribe Rule Set Execution Workflow, Modify Response on Load rules execute in a different part of the workflow than other rule types. These rules allow you to modify the response text before it is stored in Ascribe. Therefore, these rules can be used to curate the response text, perhaps to remove personally identifiable information, or to correct spelling errors. These rules can also veto a response, causing it to be discarded and not stored in Ascribe.
Like Add Finding from Response rules, the Finding object f passed to Modify Response on Load rules has only the property f.r populated. It contains the text of the response being loaded. The compiler introduces a return f; statement at the end of the rule body. This return statement is not introduced by the compiler for other rule types, except for Modify Finding rules.
If the rule returns a finding, and if the r property of the finding is not null or whitespace, that text will be stored in Ascribe as the response text. Therefore, an empty rule will store the response unchanged, because of the implicit return f; statement at the end of the rule. If the rule returns anything other than an object of type Finding the response will be discarded and not loaded. All these statements will cause the response to be discarded:
return false;
return true;
return null;
return;
If the property r of the Finding returned is null or whitespace the response will not be discarded, but the text of the response will not be modified. The rule performs no action. Modify Response on Load rules cannot introduce additional responses. Returning an array of findings will not add multiple responses. Instead, it will cause the response to be discarded, because the rule did not return an object of type Finding.
Class Rules
Class rules are not invoked directly by the Rule Set execution workflow. They allow you to write code that can be used by any rule in your Rule Set. To author Class rules you will need an understanding of the JavaScript language. Class rules are so named because you will often implement one or more JavaScript classes to be used by other rules. However, a Class rule is simply JavaScript inserted at the global level in the source code. For an in-depth discussion of Class rules see Using Class Rules in an Ascribe Rule Set.
Summary
Authoring a Rule Set requires knowledge of the JavaScript programming language, and the syntax and semantics described in this post. You can use Rule Sets to tailor your text analyses to your specific needs. Also see these related posts: Introduction to Ascribe Rule Sets, Testing Ascribe Rule Sets, Ascribe Rule Set Execution Workflow.
8/25/18
Text Analytics & AI
Introduction to Ascribe Rule Sets
Using Ascribe™ Rule Sets you can tailor the results of text analytics to your specific needs. You can do amazing things with Rule Sets, such as:
- Modify the finding produced by linguistic analysis, for example changing sentiment scores based on keywords.
- Remove findings from the analysis to discard unwanted topics or expressions.
- Add new findings to the analysis, for example to aid in alerting.
- Modify the comments prior to performing text analytics, for example to obfuscate telephone numbers.
Let’s dig into what a Rule Set looks like and how we can use them.
The structure of a Rule Set
A Rule Set has an ID, which is the name of the Rule Set. The ID must be unique among all Rule Sets in the Ascribe account. Rule Sets also have an Enabled property. If true, the Rule Set is available for use. If false, the Rule Set cannot be used. Rule Sets contain Rules, with these types and purposes:
- Modify Finding: modify the findings of the linguistic analysis.
- Veto Finding: remove a finding from the linguistic analysis.
- Add Finding from Finding: insert new findings into the linguistic analysis by examination of the findings produced by the analysis.
- Add Finding from Response: insert new findings into the linguistic analysis by examination of the responses (comments).
- Modify Response on Load: change the text of comments as they are loaded into Ascribe.
The first three Rule types listed above operate on findings emitted by the linguistic analysis. Using these three types of Rules you can tune the results of the analysis to your needs. The other two Rule types operate on responses (or comments), independent of the linguistic analysis. Each Rule in the Rule Set also has an Enabled property. If disabled, the Rule will be ignored when the Rule Set is used.
Rule Set Execution
Rule Sets work on one response or one finding at a time. Modify Response on Load rules and Add Finding from Response rules work on the responses, one response at a time. These rules do not depend on the other rule types. The other rule types operate as a pipeline:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
Ascribe passes all findings from linguistic analysis through the pipeline and stores the findings that emerge as the finished analysis. See Ascribe Rule Set Execution Workflow for a complete description of the workflow.
Authoring Rule Sets
You write rules in JavaScript. This is the programming language used by web browsers, and is one of the most common languages in use today. You can’t author a Rule Set without knowing JavaScript, but more programmers know JavaScript than any other language.
Using Rule Sets
While you need to know JavaScript to author a Rule Set, you don’t need to know anything about programming to use them. You can use a Rule Set in Ascribe in any of these ways:
- When loading data in CX Inspector, you can use a Rule Set to modify responses as they are loaded, and to tune the text analytics results.
- After you have loaded data into an Inspection, you can run a Rule Set without the need to perform text analysis again.
- When you load data using the Ascribe Coder API, you can use a Rule Set to modify the responses as they are loaded, and to automatically code the responses.
Next Steps
See Ascribe Rule Set Execution Workflow for a detailed understanding of how Rule Sets process data. See Authoring Ascribe Rule Sets to learn how to create a Rule Set.
8/24/18
Text Analytics & AI
Ascribe Rule Set Execution Workflow
You use Ascribe™ Rule Sets to modify the output of the text analytics engine, introduce your own findings to the text analytics results, and to modify comments as they are loaded into Ascribe. You can also use Rule Sets with the Ascribe Coder API to modify responses as they are loaded to Ascribe.
Rule Set Workflow
The diagram below shows the execution workflow for a Rule Set:
When you load data into CX Inspector the entire workflow is executed. When you apply a Rule Set after data are loaded into an Inspection execution begins with the stored responses and findings, and the Modify Response on Load rules are not executed. When you use a Rule Set with the Ascribe Coder API only the Modify Response on Load rules are executed.
Rules Operating on Responses
Modify Response on Load rules and Add Finding from Response rules operate on responses. They are not passed any of the output from linguistic analysis.
Modify Response on Load Rules
Ascribe executes Modify Response on Load rules only when you load data. You can modify the responses as they are loaded, and the modified responses will be stored in Ascribe. You can also veto responses, causing them to be ignored and not loaded in Ascribe.
Ascribe executes these rules in the order they are defined in the Rule Set. The output of each rule is passed to the next, and the output of the last rule is stored in Ascribe. If any rule vetoes a response the response is discarded, and any remaining rules are not executed for that response.
Add Finding from Response Rules
Ascribe executes Add Finding from Response rules both when you load data, and when you apply a Rule Set to an Inspection after loading data.
These rules contribute findings to the analysis, just like the linguistic processing engine does. You use these rules to supplement the analysis with findings you create in your rules. Ascribe sends all the responses for the variables you have selected for analysis through these rules. Therefore, you can add findings to the analysis even for responses that produced no findings from linguistic analysis. Contrast this with Add Finding from Finding rules, which operate on the findings from the linguistic analysis.
When you add findings to an analysis with these rules, those added findings are not passed through the rules operating on findings.
Rules Operating on Findings
Modify Finding, Veto Finding, and Add Findings from Finding rules operate on findings from linguistic analysis. They work as a pipeline, where each finding is passed through all the enabled rules for each of these Rule types:
Modify Finding ⇒ Veto Finding ⇒ Add Finding from Finding
CX Inspector stores the findings emitted from this pipeline, plus the findings contributed by Add Finding from Response rules, as the processed findings. CX Inspector displays the processed findings in its user interface.
Modify Finding Rules
Ascribe executes Modify Finding rules in the order they are defined in the Rule Set. Each rule can modify the finding or pass it along unchanged. The output of each rule passes to the next, and the output of the last rule passes to the Veto Finding rules.
Veto Finding Rules
Veto Finding rules cannot modify the finding. The rule can veto the finding or allow it to continue through the pipeline. If the finding is vetoed the finding is discarded and the remaining rules in the pipeline are short circuited for that finding.
Add Finding from Finding
You can add new findings to the analysis based on the results of linguistic processing with Add Finding from Finding rules. These rules cannot modify the finding they are given, but they can add one or more new findings. Because these rules have access to the finding from linguistic analysis they can make decisions using these results.
Although the rule has access to the response text, it should not add findings based on inspection of the response alone. First, the rule will not receive responses unless the linguistic engine produced a finding for the response. Second, the rule will likely receive findings for the same response multiple times, because many responses generate multiple findings during linguistic analysis.
Next Steps
You can best harness the power of Rule Sets by understanding their execution workflow. See Authoring Ascribe Rule Sets for information about how to create rules, and Rule Set Examples for sample code.
8/24/18
Text Analytics & AI
Automating Coding Workflow with the Ascribe Coder API
The Ascribe™ Coder API allows your company to send survey responses to Ascribe and retrieve back the codebooks and coded responses. Using the Ascribe Coder API you can create a study, create questions in the study, load responses to be coded into the questions, and get the results of the coding. Before you can interact with the Ascribe Coder API you must authenticate. See this post for details. The API interactions for this workflow are:
- POST /Studies: Create a new study. Its key is returned.
- POST /Questions/{StudyKey}: Create a new question in the study. Its key is returned.
- POST /Responses/{QuestionKey}: Put responses to be coded into the question.
- Humans code the responses, creating a codebook as they code.
- GET /Studies/{StudyKey}: Query for completion of the coding work. Repeat at intervals until coding is complete.
- GET /Codebooks/Question/{QuestionKey}: Get the codebook constructed by the coders.
- GET /Responses/{QuestionKey}: Get the coding results.
Let’s go through each of these operations a step at a time.
Create a Study with the Ascribe Coder API
Assuming that we want to create a new study in Ascribe, we POST to the /Studies resource. The body of the request looks like this:
{
"id": "S458-3"
}
This creates a new study with the ID S458-3. The ID can be any text. The API responds with:
{
"key": 544,
"id": "S458-3",
"status": "p"
The status of "p" means that coding is in progress. This is an indication to the coding team that this job needs to be coded. The key returned is important. That is how we will refer to the study in subsequent interactions with the API.
If we don’t want to create a study, but would rather add a new question to an existing study, we can query the /Studies resource to find the study key given the study ID.
Create a Question with the Ascribe Coder API
Armed with our study key, 544 in the example above, we can create a question in the study. To create the question, we POST to the /Questions/{StudyKey} resource, so in our case we POST to /Questions/544. The body of the request looks like this:
{
"id": "Q1",
"type": "o",
"text": "What did you like about the product?"
}
This creates a new Open question ("type": "o"), with ID “Q1”. The response has this form:
{
"key": 3487,
"id": "Q1",
"type": "o",
"text": "What did you like about the product?"
}
Now we have the key for the newly created question, and we are ready to load it with responses to be coded.
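As a rough sketch, the first two steps might look like this in a fetch-capable client. The headers variable is assumed to carry the authentication described in the authentication post, and error handling is omitted:
// Create the study, then create a question in it, keeping the returned keys.
async function createStudyAndQuestion(headers) {
  const study = await (await fetch("https://webservices.goascribe.com/coder/Studies", {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ id: "S458-3" })
  })).json();
  const question = await (await fetch("https://webservices.goascribe.com/coder/Questions/" + study.key, {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ id: "Q1", type: "o", text: "What did you like about the product?" })
  })).json();
  return { studyKey: study.key, questionKey: question.key };
}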
Load Responses to Code with the Ascribe Coder API
To load responses, we POST to the /Responses/{QuestionKey} resource, so for the example above we would POST to /Responses/3487. The body of the request would look like this:
{
"responses": [
{
"rid": "34-22907",
"verbatim": "I do not like the red color"
},
{
"rid": "34-22988",
"verbatim": "This is a fantastic product"
}
]
}
And the response:
{
"responsesAdded": 2,
"existingResponsesUnchanged": 0
Each response in the array of responses sent to the API must have the rid (respondent ID) and verbatim properties set to strings of one character or more. The rid values must be unique within the array of responses. If the rid value for a given response is already present in the question in Ascribe the response will be ignored. If we do exactly the same POST again to the API, the response would be:
{
"responsesAdded": 0,
"existingResponsesUnchanged": 2
}
This is because the respondent IDs are already present in the question. You should limit the number of responses sent in a single POST to 1000 or so. If you have more responses to load into the question than that, send them in separate POST operations, as sketched below.
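A minimal batching sketch, assuming allResponses is an array of { rid, verbatim } objects and headers carries the authentication token:
// Send responses to POST /Responses/{QuestionKey} in batches of 1000.
async function loadResponses(questionKey, allResponses, headers) {
  for (let i = 0; i < allResponses.length; i += 1000) {
    const batch = allResponses.slice(i, i + 1000);
    await fetch("https://webservices.goascribe.com/coder/Responses/" + questionKey, {
      method: "POST",
      headers: { ...headers, "Content-Type": "application/json" },
      body: JSON.stringify({ responses: batch })
    });
  }
}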
Human Coding
The coding team can now get to work on the question. As the coders do their work they will add codes to the codebook for the question. Once coding of all the questions in the study is complete, the coding team will set the status of the study to Completed. This is the signal that the coding results are available for retrieval via the API.
Polling for Completion of Coding with the Ascribe Coder API
You can test for completion of the coding work with a GET to the /Studies/{StudyKey} resource. Our study key is 544, so we need to GET from /Studies/544. The response looks like this:
{
"key": 544,
"id": "S458-3",
"status": "p"
}
Whoops! The coding is not yet complete, because the status is "p", meaning coding is in progress. Wait for a while and try again, until the status is "c", meaning completed. Now we can get the coding results.
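A simple polling loop might look like the following sketch. The polling interval is arbitrary, and headers is assumed to carry the authentication token:
// Poll GET /Studies/{StudyKey} until the coding status is "c" (completed).
async function waitForCoding(studyKey, headers) {
  for (;;) {
    const study = await (await fetch("https://webservices.goascribe.com/coder/Studies/" + studyKey, { headers })).json();
    if (study.status === "c") return study;
    await new Promise(resolve => setTimeout(resolve, 10 * 60 * 1000)); // wait ten minutes and try again
  }
}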
Retrieving a Codebook with the Ascribe Coder API
We need two things to collect the results of coding: the codebook, and the codes applied to each response. We can retrieve these in either order. To get the codebook we GET from the /Codebooks/Question/{QuestionKey} resource. In our example we would GET /Codebooks/Question/3487. The response has this form:
{
"codebookKey": 1499,
"codebook": [
{
"key": 528,
"description": "Not like red",
"outputId": "201"
},
{
"key": 529,
"description": "Fantastic product",
"outputId": "202"
}
]
}
This codebook contains only two codes. Each has a key, a textual description as displayed in the codebook, and an outputId. The output ID identifies the code for the tab department. But to associate these codes with the coding results from the API we want the code keys, 528 and 529.
Retrieving Coding Results with the Ascribe Coder API
The final operation in our workflow is to get the results of the coding. We GET from the /Responses/{QuestionKey} resource. The response has this form:
{
"responses": [
{
"rid": "34-22907",
"codeKeys": [
528
]
},
{
"rid": "34-22988",
"codeKeys": [
529
]
}
]
}
The coders have applied code 528, “Not like red”, to respondent "34-22907". 528 is the key of the code in the codebook. The other respondent has received the “Fantastic product” code. While each response in this example has only one code applied, in general any number of code keys can be returned in the codeKeys arrays.
Note that the text of the responses is not included in the response to reduce the payload size. If you want the response text you can set the includeText property in the request body to true. The response would then look like this:
{
"responses": [
{
"rid": "34-22907",
"text": "I do not like the red color",
"codeKeys": [
528
]
},
{
"rid": "34-22988",
"text": "This is a fantastic product",
"codeKeys": [
529
]
}
]
}
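To turn the code keys into readable labels, the codebook and the coded responses can be joined on the code key. A small sketch, where codebookBody and responsesBody are assumed to be the parsed bodies of the two GETs above:
// Map each code key to its description, then label each respondent's codes.
const labelByKey = new Map(codebookBody.codebook.map(c => [c.key, c.description]));
for (const r of responsesBody.responses) {
  console.log(r.rid, r.codeKeys.map(k => labelByKey.get(k)));
}
// e.g. 34-22907 [ "Not like red" ]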
Summary
The Ascribe Coder API can be used to automate the workflow in your coding department. You can send responses to be coded to Ascribe and retrieve the results of the coding operation. See the Ascribe Coder API documentation for information on how to automate coding of responses as they are sent to Ascribe.
8/21/18