
Python: Why doesn't NLU extract the entire text data?

Tags: python, ibm-watson, watson-nlu

I don't think NLU is recognizing all of the data I'm providing. Am I doing something wrong in my code, or do I have the wrong assumptions about how the API should work? Included below is the response from the API, which contains the analyzed text, along with the full text that was submitted. There is a delta between the two, and I'm not sure why.
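As an aside (not part of the original post), one quick way to quantify such a delta is to diff the submitted text against the `analyzed_text` field echoed back by the API, for example with the standard library's difflib. The strings below are stand-ins for the real payloads:

```python
import difflib

# Stand-in strings; in practice, use the full submitted text and the
# analyzed_text field from the NLU response.
submitted = "move between two thousand eight and two thousand twelve archaeologists excavated"
analyzed = "move between two thousand eight and two thousand twelve"

# A ratio near 1.0 means the service analyzed (nearly) all of the input.
ratio = difflib.SequenceMatcher(None, submitted, analyzed).ratio()
print(round(ratio, 2))
```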

Here is my code:

import os
import json
import requests

def nlu(text):
    print("Calling NLU")
    url = "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27"
    data = {
        'text': text,
        'language': "en",
        'return_analyzed_text': True,
        'clean': True,
        'features': {
            'entities': {
                'emotion': True,
                'sentiment': True,
                'limit': 2
            },
            "concepts": {
                "limit": 15
            },
            'keywords': {
                'emotion': True,
                'sentiment': True,
                'limit': 2
            }
        }
    }
    headers = {
        'content-type': "application/json"
    }
    username = os.getenv("nlu-username")
    password = os.getenv("nlu-password")
    print("NLU", username, password)
    print("data", json.dumps(data))
    response = requests.request("POST", url, data=json.dumps(data), headers=headers, auth=(username, password))
    print("Done calling NLU")
    print(response.text)
Here is the request/response:

"keywords": [
{
  "text": "anthropologists study skeletons",
  "sentiment": {
    "score": 0.0
  },"analyzed_text": "move between two thousand eight and two thousand twelve archaeologists excavated the rubble of an ancient hospital in England in the process they uncovered a number of skeletons one in particular belong to a wealthy Mel who lived in the eleventh or twelfth century and died of leprosy between the ages of eighteen and twenty five how do we know all this simply by examining some old soil Kate bones even centuries after death skeletons carry unique features that tell us about their identities and using modern tools and techniques we can read those features as clues this is a branch of science known as biological anthropology it allows researchers to piece together details about Incheon individuals and identify historical events that affected whole populations when researchers uncover a skeleton some of the first clues they gather like age and gender line its morphology which is the structure appearance and size of a skeleton mostly the clavicle stop growing at age twenty five so a skeleton with the clavicle that hasn't fully formed must be younger than similarly the plates in the cranium can continue fusing up to age forty and sometimes beyond by combining these with some microscopic skeletal clues physical anthropologists can estimate an approximate age of death meanwhile pelvic bones reveal gender biologically female palaces are wider allowing women to give birth whereas males are narrower those also betrayed the signs of aging disease disorders like anemia leave their traces on the bones and the condition of teeth can reveal clues to factors like diet and malnutrition which sometimes correlate with wealth or poverty a protein called collagen can give us even more profound details the air we breathe water we drink and food we eat leaves permanent traces in our bones and teeth in the form of chemical compounds these compounds contain measurable quantities called isotopes stable isotopes in bone collagen and tooth enamel varies among mammals 
dependent on where they lived and what they eat so but analyzing these isotopes we can draw direct inferences regarding the diet and location of historic people not only that but during life bones undergo a constant cycle of remodeling so if someone moves from one place to another bones synthesized after that move will also reflect the new isotopic signatures of the surrounding environment that means that skeletons can be used like migratory maps for instance between one and six fifty A. D. the great city of TOT Makana Mexico bustled with thousands of people researchers examined the isotope ratios and skeletons to the now which held details of their diets when they were young they found evidence for significant migration into the city a majority of the individuals were born elsewhere with further geological and skeletal analysis they may be able to map where those people came from that work in tier two Akon is also an example of how bio anthropologists study skeletons in cemeteries and mass graves and analyze their similarities and differences from not information they can learn about cultural beliefs social norms wars and what caused their deaths today we use these tools to answer big questions about how forces like migration and disease shape the modern world DNA analysis is even possible in some relatively well preserved ancient remains that's helping us understand how diseases like tuberculosis have evolved over the centuries so we can build better treatments for people today ocean skeletons can tell us a surprisingly great deal about the past two of your remains are someday buried intact what might archaeologists of the distant future learn from them"

I just tried NLU with your text and got a proper response. Check the results below. I think you should first try it with your own service credentials. That will also help you fix any misplaced headers or missing parameters in your API call.

Note: just remove "metadata": {} from the parameter object before making the POST call, since it is only meant for url and html inputs.
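For illustration (a hypothetical payload, not taken from the answer), dropping that feature before the call might look like:

```python
# Hypothetical parameter object; 'metadata' is only valid for url/html
# input, so drop it before POSTing plain text.
params = {
    "text": "anthropologists study skeletons",
    "features": {"metadata": {}, "keywords": {"limit": 2}},
}
params["features"].pop("metadata", None)
print(params["features"])
```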

{
"semantic_roles": [{
    "subject": {
        "text": "anthropologists"
    },
    "sentence": "anthropologists study skeletons",
    "object": {
        "text": "skeletons"
    },
    "action": {
        "verb": {
            "text": "study",
            "tense": "present"
        },
        "text": "study",
        "normalized": "study"
    }
}],
"language": "en",
"keywords": [{
        "text": "anthropologists",
        "relevance": 0.966464
    },
    {
        "text": "skeletons",
        "relevance": 0.896147
    }
],
"entities": [],
"concepts": [{
    "text": "Cultural studies",
    "relevance": 0.86926,
    "dbpedia_resource": "http://dbpedia.org/resource/Cultural_studies"
}],
"categories": [{
        "score": 0.927751,
        "label": "/science/social science/anthropology"
    },
    {
        "score": 0.219365,
        "label": "/education/homework and study tips"
    },
    {
        "score": 0.128377,
        "label": "/science"
    }
],
"warnings": [
    "emotion: cannot locate keyphrase",
    "relations: Not Found",
    "sentiment: cannot locate keyphrase"
]
}

In your code,

data=json.dumps(data)

converts the entire JSON object to a string before requests ever sees it. With recent versions of requests, you can instead pass the dict via the json parameter, which serializes it and sets the Content-Type header for you:

json=data
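As a side note (not part of the original answer): requests' json parameter both serializes the dict and sets the Content-Type header. A minimal offline sketch, using a prepared request and a placeholder URL in place of the real endpoint:

```python
import requests

# Hypothetical payload; example.com stands in for the real NLU endpoint.
payload = {"text": "anthropologists study skeletons", "language": "en"}

# Preparing the request shows what would go over the wire, without
# actually sending anything.
prepared = requests.Request(
    "POST", "https://example.com/v1/analyze", json=payload
).prepare()

print(prepared.headers["Content-Type"])
print(prepared.body)
```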

Additionally, I would suggest using the Watson Developer Cloud Python SDK, as it will make things easier for you.

The same example as above:

import os
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
import watson_developer_cloud.natural_language_understanding.features.v1 as Features

username = os.getenv("nlu-username")
password = os.getenv("nlu-password")

nluv1 = NaturalLanguageUnderstandingV1(
    username=username,
    password=password)

features = [
    Features.Entities(),
    Features.Concepts(),
    Features.Keywords()
]

def nlu(text):
    print('Calling NLU')
    response = nluv1.analyze(text=text, features=features, language='en')
    print('Done calling NLU')
    print(json.dumps(response, indent=2))

Good suggestion, and I'd be happy to do that, but the SDK fails to install its packages for various reasons. Why doesn't the API documentation just publish the required request/response as an example?

The problem isn't with the WDC SDK. It's a known install issue with six on OSX/Python 2.7. In any case, the sample code above is Python 3.6, which installs the SDK just fine.

Thanks for the suggestion. Is there any way to refactor those dependencies out of the SDK, or to split the client into standalone dependencies that can be imported and installed separately?

The site you listed is very useful. Is it linked from the official documentation? How did you find it?

I suspect a fix was rolled out, because it started working the day after I posted to SO.

Most of the API calls are linked from the official documentation. Some services have been deprecated, but their API calls are still available in that app, such as Alchemy. As for NLU, I checked that app's API calls about a week ago, before seeing your query. It was there for me, which is pretty cool.

I didn't even know this existed until you sent it. Thanks for sharing.