
Apache --no-parent: do not ascend to the parent directory

Tags: apache, tomcat, batch-file, web

I cobbled this together; it works on Win8.

Note that it only tells you whether the site responded with something; it does not check whether the page served is the normal operational page or an error message.

@echo off
if "%~1"=="" (
echo %0 www.url.com
echo Checks the status of the URL
pause
goto :EOF
)
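rem The block below writes geturl.vbs into %temp%; the VBScript fetches the URL
rem with MSXML2.XMLHTTP and saves the response body to the file named by the
rem second argument (url.htm), which the batch file then tests for existence and size.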


 >"%temp%\geturl.vbs" echo Set objArgs = WScript.Arguments
>>"%temp%\geturl.vbs" echo url = objArgs(0)
>>"%temp%\geturl.vbs" echo pix = objArgs(1)
>>"%temp%\geturl.vbs" echo With CreateObject("MSXML2.XMLHTTP")
>>"%temp%\geturl.vbs" echo .open "GET", url, False
>>"%temp%\geturl.vbs" echo .send
>>"%temp%\geturl.vbs" echo a = .ResponseBody
>>"%temp%\geturl.vbs" echo End With
>>"%temp%\geturl.vbs" echo With CreateObject("ADODB.Stream")
>>"%temp%\geturl.vbs" echo .Type = 1 'adTypeBinary
>>"%temp%\geturl.vbs" echo .Mode = 3 'adModeReadWrite
>>"%temp%\geturl.vbs" echo .Open
>>"%temp%\geturl.vbs" echo .Write a
>>"%temp%\geturl.vbs" echo .SaveToFile pix, 2 'adSaveCreateOverwrite
>>"%temp%\geturl.vbs" echo .Close
>>"%temp%\geturl.vbs" echo End With


cscript /nologo "%temp%\geturl.vbs" http://%1 url.htm 2>nul 
if not exist url.htm (
echo site is down or access is denied
) else (
for %%a in (url.htm) do if %%~za GTR 0 echo site is up
del url.htm
)
del "%temp%\geturl.vbs"
pause
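Assuming the batch file above is saved as, say, checkurl.bat (the file name is just an example), run it with the host name as its only argument:

checkurl.bat www.google.com

The script prepends http:// itself, so pass only the host name.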

You should probably look at using something like curl or wget for this, depending on how you define "the site is up". Also consider using PowerShell. Check BITSADMIN; it is a native Windows download command, and you can read the site's response with it.

I'd rather not add third-party software if I can avoid it. I have a PHP approach, but I don't know how to make PHP and a batch file work together.

Where is this script, geturl.vbs?

It is created by the batch file: the part between the blank lines.
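For reference, a minimal batch sketch of the PowerShell suggestion (assumptions: PowerShell 3.0+ for Invoke-WebRequest, and http://example.com as a placeholder URL). Invoke-WebRequest throws on connection failures, so a try/catch maps directly to up/down:

@echo off
rem Sketch only: PowerShell 3.0+ required; replace http://example.com with the real URL.
powershell -NoProfile -Command "try { Invoke-WebRequest -Uri 'http://example.com' -UseBasicParsing -TimeoutSec 5 | Out-Null; exit 0 } catch { exit 1 }"
if %errorlevel%==0 (echo site is up) else (echo site is down)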
wget --timeout=5 --tries=1 --quiet --spider http://google.com >nul 2>&1 && echo site is up || echo site is down
From wget --help (GNU Wget 1.8.2), keeping only the options relevant here:

GNU Wget 1.8.2, a non-interactive network retriever.
Usage: wget [OPTION]... [URL]...
  -q,  --quiet              quiet (no output).
  -t,  --tries=NUMBER       set number of retries to NUMBER (0 unlimits).
       --spider             don't download anything.
  -T,  --timeout=SECONDS    set the read timeout to SECONDS.
  -np, --no-parent          don't ascend to the parent directory.
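A similar one-liner with curl (a sketch, assuming curl is installed; -f makes curl return a non-zero exit code on HTTP errors, -s silences output, --max-time caps the wait):

curl -f -s --max-time 5 http://google.com >nul 2>&1 && echo site is up || echo site is down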