Why doesn't Cygnus connect to MongoDB on another virtual machine?


Good morning,

I have the following set of virtual machines:

  • Virtual Machine A
    • Generic Enablers Orion and Cygnus
    • IP: 10.10.0.10
  • Virtual Machine B
    • MongoDB
    • IP: 10.10.0.17
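
As a first sanity check, connectivity from VM A to the MongoDB port on VM B can be verified like this (a sketch; the second command assumes the mongo shell client is installed on VM A):

# From VM A (10.10.0.10): is the MongoDB port on VM B reachable?
telnet 10.10.0.17 27017
# If the mongo shell client is available on VM A:
mongo --host 10.10.0.17 --port 27017 --eval 'db.adminCommand({ ping: 1 })'
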
The Cygnus configuration is:

/usr/cygnus/conf/cygnus_instance_mongodb.conf

#####
#
# Configuration file for apache-flume
#
#####
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
# 
# This file is part of fiware-connectors (FI-WARE project).
# 
# cosmos-injector is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# cosmos-injector is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
# 
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
# 
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es

# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus

# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf

# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongodb.conf

# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters 
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent

# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order for the log rotation to work properly
LOGFILE_NAME=cygnus.log

# Administration port. Must be unique per instance
ADMIN_PORT=8081

# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
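
With ADMIN_PORT=8081, this Cygnus instance also exposes a small management API, which gives a quick way to confirm the instance is actually running (a sketch, to be run on VM A; the /version endpoint belongs to the Cygnus management interface, but its output depends on the Cygnus version):

curl http://localhost:8081/version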

/usr/cygnus/conf/agent_mongodb.conf

#####
#
# Copyright 2014 Telefónica Investigación y Desarrollo, S.A.U
# 
# This file is part of fiware-connectors (FI-WARE project).
# 
# fiware-connectors is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-connectors is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
# 
# You should have received a copy of the GNU Affero General Public License along with fiware-connectors. If not, see
# http://www.gnu.org/licenses/.
# 
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es

#=============================================
# To be put in APACHE_FLUME_HOME/conf/agent.conf
#
# General configuration template explaining how to set up a sink of each of the available types (HDFS, CKAN, MySQL).

#=============================================
# The next three fields set the sources, sinks and channels used by Cygnus. You could use different names than the
# ones suggested below, but in that case make sure you keep coherence in properties names along the configuration file.
# Regarding sinks, you can use multiple types at the same time; the only requirement is to provide a channel for each
# one of them (this example shows how to configure 3 sink types at the same time). Even, you can define more than one
# sink of the same type and sharing the channel in order to improve the performance (this is like having
# multi-threading).
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel

#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# Timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# Destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf

# ============================================
# OrionMongoSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# true if the grouping feature is enabled for this sink, false otherwise
cygnusagent.sinks.mongo-sink.enable_grouping = false
# the FQDN/IP address where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_host = 10.10.0.17:27017
# a valid user in the MongoDB server
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above
cygnusagent.sinks.mongo-sink.mongo_password = 
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = hvds_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = hvds_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# specify if the sink will use a single collection for each service path, for each entity or for each attribute
cygnusagent.sinks.mongo-sink.data_model = collection-per-entity  
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mongo-sink.attr_persistence = column

#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# maximum number of events handled per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
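
With the sink configuration above, the data should end up in a MongoDB database whose name starts with the hvds_ prefix (for the default service def_serv, presumably hvds_def_serv, although the exact naming rules depend on the Cygnus version). A quick way to check from VM B whether anything was written (a sketch; the database name is an assumption):

# List the databases and look for one starting with hvds_
mongo --host 10.10.0.17 --port 27017 --eval 'db.adminCommand({ listDatabases: 1 })'
# Assumed database name: db_prefix + default_service
mongo 10.10.0.17:27017/hvds_def_serv --eval 'db.getCollectionNames()'
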
I execute the following steps:

First, I subscribe to the sensor and the data to be saved:

(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
    {
    "entities": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor003"
        }
    ],
    "attributes": [
        "potencia_max",
        "potencia_min",
        "coste",
        "co2"
    ],
    "reference": "http://localhost:5050/notify",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONTIMEINTERVAL",
            "condValues": [
                "PT5S"
            ]
        }
    ]
}
EOF
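
The reference above points at http://localhost:5050/notify, which matches the http-source port in the agent configuration, so Orion delivers its notifications to Cygnus on the same VM. A quick check that the Flume HTTP source is actually listening on that port (a sketch; on newer systems ss can replace netstat):

netstat -ntlp | grep 5050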

There are a couple of things that would be useful for debugging this, in particular the Orion and Cygnus logs. Regarding Cygnus, they must be under
/var/log/cygnus/cygnus.log
. If the log is large, please use a pastebin link. Thanks for the Cygnus log; it shows that the data is being received and apparently persisted. However, if you see nothing in MongoDB, then there is a problem with the connection. Could you run Cygnus in DEBUG mode to see what exactly happens with the connection between Cygnus and MongoDB? The logs in DEBUG mode show no errors, only lines like this one:
time=07-01-2016T17:32:53.844CET | lvl=DEBUG | trans=1452184258-718-0000000000 | function=createCollection | comp=Cygnus | msg=com.telefonica.iot.cygnus.backends.mongo.MongoBackend[103] : the collection already exists, nothing to create
, which means the MongoDB driver is working properly and the collection had already been created in a past execution.
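
One way to obtain such DEBUG traces is to run Cygnus in the foreground with DEBUG logging (a sketch based on the Flume launcher shipped with the Cygnus RPM; adjust the binary path and configuration file name if the installation differs):

/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf \
    -f /usr/cygnus/conf/agent_mongodb.conf -n cygnusagent \
    -Dflume.root.logger=DEBUG,console
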
(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "contextElements": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor003",
            "attributes": [
                {
                    "name":"potencia_max",
                    "type":"float",
                    "value":"1000"
                },
                {
                    "name":"potencia_min",
                    "type":"float",
                    "value":"200"
                },
                {
                    "name":"coste",
                    "type":"float",
                    "value":"0.24"
                },
                {
                    "name":"co2",
                    "type":"float",
                    "value":"12"
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
EOF
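
After this update, the ONTIMEINTERVAL subscription above should trigger a notification to Cygnus within 5 seconds. A quick way to confirm that the notification arrived and that the sink tried to persist it is to watch the Cygnus log (a sketch; the exact message texts vary between Cygnus versions):

tail -n 100 /var/log/cygnus/cygnus.log | grep -i -E 'OrionRestHandler|OrionMongoSink'
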
(curl http://10.10.0.10:1026/NGSI10/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "entities": [{
        "type": "Sensor",
        "isPattern": "false",
        "id": "sensor005"
    }],
    "attributes": [
        "muestreo"
    ],
    "reference": "http://localhost:5050/notify",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONCHANGE",
        "condValues": [
            "muestreo"
        ]
    }],
    "throttling": "PT1S"
}
EOF
(curl http://10.10.0.10:1026/NGSI10/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
    "contextElements": [
        {
            "type": "Sensor",
            "isPattern": "false",
            "id": "sensor005",
            "attributes": [
                {
                    "name":"magnitud",
                    "type":"string",
                    "value":"energia"
                },
                {
                    "name":"unidad",
                    "type":"string",
                    "value":"Kw"
                },
                {
                    "name":"tipo",
                    "type":"string",
                    "value":"electrico"
                },
                {
                    "name":"valido",
                    "type":"boolean",
                    "value":"true"
                },
                {
                    "name":"muestreo",
                    "type":"hora/kw",
                    "value": {
                        "tiempo": [
                            "10:00:31",
                            "10:00:32",
                            "10:00:33",
                            "10:00:34",
                            "10:00:35",
                            "10:00:36",
                            "10:00:37",
                            "10:00:38",
                            "10:00:39",
                            "10:00:40",
                            "10:00:41",
                            "10:00:42",
                            "10:00:43",
                            "10:00:44",
                            "10:00:45",
                            "10:00:46",
                            "10:00:47",
                            "10:00:48",
                            "10:00:49",
                            "10:00:50",
                            "10:00:51",
                            "10:00:52",
                            "10:00:53",
                            "10:00:54",
                            "10:00:55",
                            "10:00:56",
                            "10:00:57",
                            "10:00:58",
                            "10:00:59",
                            "10:01:60"
                        ],
                        "kw": [
                            "200",
                            "201",
                            "200",
                            "200",
                            "195",
                            "192",
                            "190",
                            "189",
                            "195",
                            "200",
                            "205",
                            "210",
                            "207",
                            "205",
                            "209",
                            "212",
                            "215",
                            "220",
                            "225",
                            "230",
                            "250",
                            "255",
                            "245",
                            "242",
                            "243",
                            "240",
                            "220",
                            "210",
                            "200",
                            "200"
                        ]
                    }
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
EOF
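
If everything works end to end, the sensor005 samples should appear as documents in one of the hvds_-prefixed collections. A final check from VM B (a sketch; replace <collection> with one of the names actually reported, since the exact collection name is built by Cygnus from the collection_prefix, the service path and the entity id/type):

mongo 10.10.0.17:27017/hvds_def_serv --eval 'db.getCollectionNames()'
mongo 10.10.0.17:27017/hvds_def_serv --eval 'db.getCollection("<collection>").find().limit(3).forEach(printjson)'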