Python: how to send data from a Jetson Nano to an Arduino?


I am trying to connect an NVIDIA Jetson Nano to an Arduino Uno over USB serial, so that when the camera attached to the Jetson Nano detects an object, an LED turns on. However, it is not working, and I think my Arduino is not receiving any data from the Jetson. Any suggestions or answers would be greatly appreciated. Here is the code I wrote for the Arduino and the Jetson Nano:

Arduino:

char data;
int LED = 13;

void setup() {
  Serial.begin(9600);
  pinMode(LED, OUTPUT);
  digitalWrite(LED, LOW);
}

void loop() {
  if (Serial.available()) {
    data = Serial.read();
  }
  if (data == 'Y' || data == 'y') {
    digitalWrite(LED, HIGH);
    delay(5000);
  }
}
Jetson Nano:

#!/usr/bin/python
import jetson.inference
import jetson.utils
import time
import serial
import argparse
import sys

# configure the serial connection (the parameters depend on the device you are connecting to)
ser = serial.Serial(port='/dev/ttyUSB0',baudrate=9600)

# parse the command line
parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.", formatter_class=argparse.RawTextHelpFormatter,epilog=jetson.inference.detectNet.Usage())
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are:  'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use") 
parser.add_argument("--camera", type=str, default="0", help="index of the MIPI CSI camera to use (e.g. CSI camera 0)\nor for VL42 cameras, the /dev/video device to use.\nby default, MIPI CSI camera 0 will be used.")
parser.add_argument("--width", type=int, default=1280, help="desired width of camera stream (default is 1280 pixels)")
parser.add_argument("--height", type=int, default=720, help="desired height of camera stream (default is 720 pixels)")

try:
    opt = parser.parse_known_args()[0]
except:
    print("")
    parser.print_help()
    sys.exit(0)

# load the object detection network
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)

# create the camera and display
camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)
display = jetson.utils.glDisplay()

# process frames until user exits
while display.IsOpen():
    # capture the image
    img, width, height = camera.CaptureRGBA()

    # detect objects in the image (with overlay)
    detections = net.Detect(img, width, height, opt.overlay)

    # print the detections
    print("detected {:d} objects in image".format(len(detections)))

    for detection in detections:
        print(detection)

    # render the image
    display.RenderOnce(img, width, height)

    # update the title bar
    display.SetTitle("{:s} | Network {:.0f} FPS".format(opt.network, net.GetNetworkFPS()))

    # print out performance info
    net.PrintProfilerTimes()

    if (detections > 0):
        ser = serial.Serial(port='/dev/ttyUSB0', baudrate=9600)
        time.sleep(2)
        print(ser)
        ser.write('Y')

As Alan Elkin already mentioned, you need to narrow down where the problem is. That said, this is how I would debug your issue:

  • Test only the serial communication between the two devices. I suggest removing every other piece of logic from your Python script (the object detection with the camera); see the minimal sketch after this list.

  • Check your wiring. Common problems are: the two devices do not share a common GND, or the logic levels differ (most Arduinos run at 5 V, while the Jetson Nano runs at 3.3 V).

  • Check the serial communication settings: baud rate, parity bits, etc.

  • If you have an oscilloscope, probe the pin that transmits the data. You should see the signal toggling.
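
For the first point, a stripped-down test script could look like the sketch below. It is only a sketch: it assumes the Arduino enumerates as /dev/ttyUSB0 (an Uno often shows up as /dev/ttyACM0 instead) and uses the same 9600 baud as the Arduino sketch above; adjust both to your setup.

#!/usr/bin/python
# Minimal serial test: send 'Y' to the Arduino once per second and nothing else.
import time
import serial

ser = serial.Serial(
    port='/dev/ttyUSB0',    # adjust if the board shows up as /dev/ttyACM0
    baudrate=9600,          # must match Serial.begin(9600) in the Arduino sketch
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)

time.sleep(2)               # opening the port resets the Uno; give it time to boot

while True:
    ser.write(b'Y')         # pyserial expects bytes, not str, on Python 3
    print("sent 'Y'")
    time.sleep(1)

If the LED does not light up with this script alone, the problem is in the connection or the serial settings rather than in the detection code; if it does light up, add the detection logic back in step by step.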

Hopefully this gives you a hint on how to pin down the problem.

What do you mean by "it is not working"? Please provide details about your problem and what you tried before posting. See the guidance on how to ask a question for more details.