
Java: Passing a StringBuffer value from an external class to the main class


I'm struggling to pass the value of the StringBuffer in the Crawler class below to the main class:

import java.util.*;
import java.io.*;
import java.net.URL;

public class Crawler
{
    public int threads = 0;
    public int results = 0;
    public String output; // this is the string I will pass the buffer value to

    public void crawler(String startingURL)
    {
        ArrayList<String> listOfPendingURLs = new ArrayList<String>();
        ArrayList<String> listOfTraversedURLs = new ArrayList<String>();

        listOfPendingURLs.add(startingURL); // add the starting URL to a list named listOfPendingURLs

        while (!listOfPendingURLs.isEmpty()         // while listOfPendingURLs is not empty
                && listOfTraversedURLs.size() <= 100)
        {
            String urlString = listOfPendingURLs.remove(0); // remove a URL from listOfPendingURLs

            if (!listOfTraversedURLs.contains(urlString))   // if this URL is not in listOfTraversedURLs
            {
                listOfTraversedURLs.add(urlString);         // add it to listOfTraversedURLs
                System.out.println("Craw " + urlString);    // display this URL -- change this to display on the panel

                try
                {
                    URL oURL = new URL(urlString);
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(oURL.openStream()));
                    StringBuffer strbuf = new StringBuffer();
                    String lines;
                    while ((lines = in.readLine()) != null)
                        strbuf.append(lines); // I want to pass the html source code to main from here

                    output = strbuf.toString();         // convert to string
                    strbuf.delete(0, strbuf.length());  // empty the buffer
                    results++;                          // GUI statistics variable - for future use

                    in.close();
                }
                catch (Exception e)
                {
                    e.printStackTrace();
                }

                // read the page from this URL and, for each URL contained in the page,
                // add it to listOfPendingURLs if it is not in listOfTraversedURLs
                for (String s : getSubURLs(urlString)) {
                    if (!listOfTraversedURLs.contains(s))
                        listOfPendingURLs.add(s);
                } // exit the while loop when the size of the list reaches 100
            }
        }
    }

    public static ArrayList<String> getSubURLs(String urlString) {
        ArrayList<String> list = new ArrayList<String>();

        try {
            java.net.URL url = new java.net.URL(urlString);
            Scanner input = new Scanner(url.openStream());
            int current = 0;

            while (input.hasNext())
            {
                String line = input.nextLine();
                current = line.indexOf("http:", current);

                while (current > 0)
                {
                    int endIndex = line.indexOf("\"", current);

                    if (endIndex > 0) { // ensure that a correct URL is found
                        list.add(line.substring(current, endIndex));
                        current = line.indexOf("http:", endIndex);
                    }
                    else
                        current = -1;
                }
            }
        }
        catch (Exception ex)
        {
            System.out.println("Error: " + ex.getMessage());
        }

        return list;
    }
}

For some reason the buffer reads back as null and the source code is never displayed... any ideas? I've been trying many variations, but none of them seem to work.

Adjust the crawler method so that it returns a String when called, like:

public String crawler(String startingURL) {
    String result = "";
    listOfPendingURLs = new ArrayList<String>();
    listOfTraversedURLs = new ArrayList<String>();

    listOfPendingURLs.add(startingURL);
    while (!listOfPendingURLs.isEmpty()
            && listOfTraversedURLs.size() <= 100) {
        String urlString = listOfPendingURLs.remove(0);
        if (!listOfTraversedURLs.contains(urlString)) {
            listOfTraversedURLs.add(urlString);
            // TODO display on panel instead
            System.out.println("Craw " + urlString);
            try {
                URL oURL = new URL(urlString);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(oURL.openStream()));
                StringBuffer strbuf = new StringBuffer();
                String lines;
                while ((lines = in.readLine()) != null) {
                    strbuf.append(lines);
                }
                result = result + strbuf.toString(); // append this page's source to the result
                strbuf.delete(0, strbuf.length());   // empty the buffer
                results++;                           // GUI statistics variable - for future use

                in.close();
            }
            catch (Exception e) {
                e.printStackTrace();
            }

            // read the page from this URL and, for each URL contained in the page,
            // add it to listOfPendingURLs if it is not in listOfTraversedURLs
            for (String s : getSubURLs(urlString)) {
                if (!listOfTraversedURLs.contains(s))
                    listOfPendingURLs.add(s);
            }
        }
    }
    return result;
}
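Note the key difference from the original: `output = strbuf.toString()` overwrites the field on every URL, so main only ever sees the last page crawled, while `result = result + strbuf.toString()` accumulates across URLs. A tiny network-free sketch of the two assignment patterns (hypothetical class and method names, not part of the original code):

```java
public class AccumulateDemo {
    // Overwriting: only the last page survives, like output = strbuf.toString()
    static String overwrite(String[] pages) {
        String output = "";
        for (String page : pages)
            output = page;
        return output;
    }

    // Accumulating: every page is kept, like result = result + strbuf.toString()
    static String accumulate(String[] pages) {
        String result = "";
        for (String page : pages)
            result = result + page;
        return result;
    }

    public static void main(String[] args) {
        String[] pages = {"page1 ", "page2 ", "page3"};
        System.out.println(overwrite(pages));  // prints: page3
        System.out.println(accumulate(pages)); // prints: page1 page2 page3
    }
}
```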

Comments on the answer:

- Why not just return the buffer from the method call instead of having the classes peek at each other?! Your code looks confusing to me. You seem to be discarding the entire contents of the StringBuffer (which should probably be a StringBuilder, shouldn't it?) on every iteration of the while loop. Are you sure you want to do that?
- (asker) I probably want to take the whole document and output it to main.
- No, you are reading into the StringBuffer, passing it to a String, and then discarding both. Why?
- (asker) How am I discarding them? I'm going blind here... sorry mate, the deadline is coming up fast and I haven't had enough sleep.
- (asker) I tried that before, mate. It doesn't output anything, and I don't know why... I also placed StringBuffer strbuf = new StringBuffer(); and String lines; outside the try, and tried other variants, but still nothing...
- (asker) BTW, it works with System.out.println(result); if placed inside if (!listOfTraversedURLs.contains(urlString)). Double-checked it; it needs to go inside while (!listOfPendingURLs.isEmpty() && listOfTraversedURLs.size() <= 100).
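As the comments suggest, StringBuilder is the usual choice when no thread safety is needed, and try-with-resources removes the manual in.close(). A network-free sketch of reading any Reader into a String (hypothetical names, not from the original thread); it also re-adds the newlines that readLine() strips, which is one reason printed "source code" can come out mangled into a single line:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ReadToString {
    // Reads all lines from a Reader into one String, re-adding the
    // newlines that BufferedReader.readLine() strips off.
    static String readAll(Reader source) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(source)) {
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } // the reader is closed here even if an exception is thrown
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // StringReader stands in for new InputStreamReader(oURL.openStream())
        String text = readAll(new StringReader("line one\nline two"));
        System.out.print(text); // prints the two lines, newline preserved
    }
}
```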