
Python Postgres performance

Tags: python, django, postgresql, twisted, postgis

I need help with PostgreSQL performance.

I have tuned my postgresql.conf to optimize the server, but it is still slow and CPU usage goes above 120%.

I don't know how to fix this. I searched Google for more information, but it was not enough. I have also tried VACUUM statements and reindexing the database, but it is still slow.

My application is a GPS listener: a TCP server built on Python Twisted inserts more than 6,000 records per minute with no problem. The problem appears when I try to follow the GPS devices on a map over a period of time: my Django application queries the database every 6 seconds, using a stored procedure that requests the last positions, but with more than 50 devices the query runs slowly and the CPU starts using more than 120% of its resources.

The Django application connects directly to the Postgres database, while the TCP listener server for the devices connects through pgbouncer in threaded mode. I am not putting my Django web application behind pgbouncer because I do not want to risk crashing the GPS device connections on pgbouncer.

I hope you can help me get better performance.

I am attaching my stored procedure, my configuration files, and my CPU and memory information.

Stored procedure:

$ cat /etc/postgresql/9.1/main/postgresql.conf

$ cat /etc/pgbouncer/pgbouncer.ini

$ free -h

$ cat /proc/cpuinfo

Never do that. Your stored procedure simply builds the result set iteratively; in effect, you are running one query per target device. Instead, use ANY + ROW_NUMBER (see the query below).

I don't have Postgres and your database at hand to verify that the solution is correct, but you should be able to debug it from there. At the very least, it is a direction worth following.

For more details, see and .

To improve responsiveness, consider fetching only position updates. Instead of fetching the whole history every 6 seconds, fetch it once on page load / app init / whatever, and afterwards only poll the server for updates. The inner query would look something like:

    SELECT 
        ROW_NUMBER() OVER(PARTITION BY dt.imei ORDER BY dt.date_time_process DESC) as rnumber
        --other fields here
    FROM gpstracking_device_tracks dt --added alias
    WHERE dt.imei = ANY(arr)
    AND dt.date_time_process >= @last_poll_timestamp
    AND dt.date_time_process <= NOW()
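Putting the pieces together, here is a minimal sketch of how the complete function might look. This is an assumption-laden sketch, not the poster's actual procedure: the function name get_last_positions, the _last_poll parameter, and the column types are made up for illustration, and the returned columns are trimmed to two. The @last_poll_timestamp placeholder above is not valid Postgres syntax (the DatabaseError quoted in the comments below comes from exactly that), so this sketch passes the previous poll time as a function argument instead:

    CREATE OR REPLACE FUNCTION get_last_positions(
        _imeis text,                  -- whitespace-separated list of IMEIs
        _last_poll timestamptz        -- time of the previous poll
    )
    RETURNS TABLE (imei varchar, date_time_process timestamptz) AS $$
    DECLARE
        arr text[];
    BEGIN
        -- split the IMEI list once, as in the original procedure
        arr := regexp_split_to_array(_imeis, E'\\s+');
        RETURN QUERY
            SELECT q.imei, q.date_time_process   -- other fields here
            FROM (
                SELECT
                    dt.imei,
                    dt.date_time_process,
                    ROW_NUMBER() OVER (PARTITION BY dt.imei
                                       ORDER BY dt.date_time_process DESC) AS rnumber
                FROM gpstracking_device_tracks dt
                WHERE dt.imei = ANY(arr)
                  AND dt.date_time_process >= _last_poll
                  AND dt.date_time_process <= now()
            ) q
            WHERE q.rnumber = 1;   -- keep only the newest row per device
    END;
    $$ LANGUAGE plpgsql;

On each 6-second poll the Django side would then run something like SELECT * FROM get_last_positions('<imei list>', <previous poll time>); instead of re-reading the whole history.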

Comments:

I am trying to use it, but rnumber is not available in the WHERE clause, which gives me an error. The easiest way to fix it is to wrap it in an enclosing SELECT and apply the rnumber = 1 filter in the outer SELECT, like this: select * from (query used here) where rnumber = 1. But I am not sure about the performance of this approach.

Updated the answer.

I am selecting this as the answer because responsiveness did improve as well.

@J0HN, to fetch the first record of each group, Postgres DISTINCT ON can be faster than ROW_NUMBER.

DatabaseError at /tracking/positions/request_live/: column "last_poll_timestamp" does not exist, LINE 39: ...D gpstracking_device_tracks.date_time_process >= @last_poll_...

96MB work_mem seems very high: with several concurrent users you certainly cannot guarantee that each of them may use several times 96MB for sort operations, and 80 connections like that implies at least 7.5 GB of RAM (80 x 96 MB). You should also show us the indexes available on gpstracking_device_tracks. Can you add an EXPLAIN ANALYZE of the query in the function? By the way, do you have an index on gpstracking_device_tracks.date_time_process?

I have solved it. Yes, I also think it is too much, but I am not a DBA; I am just applying my knowledge as a coder, trying to understand it and some pg tuning pages.
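For reference, a sketch of the DISTINCT ON alternative raised in the comments, together with a composite index covering the filter and the per-device ordering. Table and column names come from the question; the index name idx_tracks_imei_time is hypothetical:

    -- DISTINCT ON keeps the first row per imei according to the ORDER BY,
    -- so ordering by date_time_process DESC yields the latest position
    SELECT DISTINCT ON (dt.imei)
        dt.imei,
        dt.date_time_process
        -- other fields here
    FROM gpstracking_device_tracks dt
    WHERE dt.imei = ANY(arr)
      AND dt.date_time_process >= date_trunc('hour', now())
      AND dt.date_time_process <= now()
    ORDER BY dt.imei, dt.date_time_process DESC;

    -- composite index matching the WHERE clause and the sort
    CREATE INDEX idx_tracks_imei_time
        ON gpstracking_device_tracks (imei, date_time_process DESC);

DISTINCT ON does not have to number every row in each partition, and with the composite index Postgres can often read the rows already ordered per device instead of sorting them.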
postgresql.conf:

data_directory = '/var/lib/postgresql/9.1/main'         # use data in another directory
hba_file = '/etc/postgresql/9.1/main/pg_hba.conf'       # host-based authentication file
ident_file = '/etc/postgresql/9.1/main/pg_ident.conf'   # ident configuration file
external_pid_file = '/var/run/postgresql/9.1-main.pid'          # write an extra PID file
listen_addresses = 'localhost'          # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 80                    # (change requires restart)
superuser_reserved_connections = 3      # (change requires restart)
unix_socket_directory = '/var/run/postgresql'           # (change requires restart)
#unix_socket_group = ''                 # (change requires restart)
#unix_socket_permissions = 0777         # begin with 0 to use octal notation
#bonjour = off                          # advertise server via Bonjour
#bonjour_name = ''                      # defaults to the computer name
ssl = true                              # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH'      # allowed SSL ciphers
#ssl_renegotiation_limit = 512MB        # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
#krb_server_keyfile = ''
#krb_srvname = 'postgres'               # (Kerberos only)
#krb_caseins_users = off
#tcp_keepalives_idle = 0                # TCP_KEEPIDLE, in seconds;
#tcp_keepalives_interval = 0            # TCP_KEEPINTVL, in seconds;
#tcp_keepalives_count = 0               # TCP_KEEPCNT;
# shared_buffers = 4096MB                       # min 128kB
temp_buffers = 16MB                     # min 800kB
# work_mem = 80MB                               # min 64kB
# maintenance_work_mem = 2048MB         # min 1MB
max_stack_depth = 4MB                   # min 100kB
#max_files_per_process = 1000           # min 25
#vacuum_cost_delay = 0ms                # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits
#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multipler on buffers scanned/round
#effective_io_concurrency = 1           # 1-1000. 0 disables prefetching
#wal_level = minimal                    # minimal, archive, or hot_standby
#fsync = on                             # turns forced synchronization on or off
#synchronous_commit = on                # synchronization level; on, off, or local
#wal_sync_method = fsync                # the default is the first option
#full_page_writes = on                  # recover from partial page writes
#wal_buffers = -1                       # min 32kB, -1 sets based on shared_buffers
#wal_writer_delay = 200ms               # 1-10000 milliseconds
#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000
# checkpoint_segments = 64              # in logfile segments, min 1, 16MB each
checkpoint_timeout = 5min               # range 30s-1h
# checkpoint_completion_target = 0.5    # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s               # 0 disables
#archive_mode = off             # allows archiving to be done
#archive_command = ''           # command to use to archive a logfile segment
#archive_timeout = 0            # force a logfile segment switch after this
#max_wal_senders = 0            # max number of walsender processes
#wal_sender_delay = 1s          # walsender cycle time, 1-10000 milliseconds
#wal_keep_segments = 0          # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed
#replication_timeout = 60s      # in milliseconds; 0 disables
#synchronous_standby_names = '' # standby servers that provide sync rep
#hot_standby = off                      # "on" allows queries during recovery
#max_standby_archive_delay = 30s        # max delay before canceling queries
#max_standby_streaming_delay = 30s      # max delay before canceling queries
#wal_receiver_status_interval = 10s     # send replies at least this often
#hot_standby_feedback = off             # send info from standby to prevent
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#seq_page_cost = 1.0                    # measured on an arbitrary scale
#random_page_cost = 4.0                 # same scale as above
cpu_tuple_cost = 0.01                   # same scale as above
cpu_index_tuple_cost = 0.005            # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
# effective_cache_size = 8192MB
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0
#geqo_seed = 0.0                        # range 0.0-1.0
#default_statistics_target = 100        # range 1-10000
#constraint_exclusion = partition       # on, off, or partition
#cursor_tuple_fraction = 0.1            # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit
#log_destination = 'stderr'             # Valid values are combinations of
#logging_collector = off                # Enable capturing of stderr and csvlog
# These are only used if logging_collector is on:
#log_directory = 'pg_log'               # directory where log files are written,
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'        # log file name pattern,
#log_file_mode = 0600                   # creation mode for log files,
#log_truncate_on_rotation = off         # If on, an existing log file with the
#log_rotation_age = 1d                  # Automatic rotation of logfiles will
#log_rotation_size = 10MB               # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#silent_mode = off                      # Run server silently.
#client_min_messages = notice           # values in order of decreasing detail:
#log_min_messages = warning             # values in order of decreasing detail:
#log_min_error_statement = error        # values in order of decreasing detail:
#log_min_duration_statement = -1        # -1 is disabled, 0 logs all statements
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default          # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t '                 # special values:
#log_lock_waits = off                   # log lock waits >= deadlock_timeout
#log_statement = 'none'                 # none, ddl, mod, all
#log_temp_files = -1                    # log temporary files equal or larger
#log_timezone = '(defaults to server environment setting)'
#track_activities = on
#track_counts = on
#track_functions = none                 # none, pl, all
#track_activity_query_size = 1024       # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#autovacuum = on                        # Enable autovacuum subprocess?  'on'
#log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and
#autovacuum_max_workers = 3             # max number of autovacuum subprocesses
#autovacuum_naptime = 1min              # time between autovacuum runs
#autovacuum_vacuum_threshold = 50       # min number of row updates before
#autovacuum_analyze_threshold = 50      # min number of row updates before
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for
#search_path = '"$user",public'         # schema names
#default_tablespace = ''                # a tablespace name, '' uses the default
#temp_tablespaces = ''                  # a list of tablespace names, '' uses
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0                  # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex'                   # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = '(defaults to server environment setting)'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
#extra_float_digits = 0                 # min -15, max 3
#client_encoding = sql_ascii            # actually, defaults to database
lc_messages = 'en_US.UTF-8'                     # locale for system error message
lc_monetary = 'en_US.UTF-8'                     # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'                      # locale for number formatting
lc_time = 'en_US.UTF-8'                         # locale for time formatting
default_text_search_config = 'pg_catalog.english'
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#deadlock_timeout = 1s
#max_locks_per_transaction = 64         # min 10
#max_pred_locks_per_transaction = 64    # min 10
#array_nulls = on
#backslash_quote = safe_encoding        # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
#transform_null_equals = off
#exit_on_error = off                            # terminate session on any error?
#restart_after_crash = on                       # reinitialize after backend crash?
#custom_variable_classes = ''           # list of custom variable class names
default_statistics_target = 50 # pgtune wizard 2013-09-24
maintenance_work_mem = 960MB # pgtune wizard 2013-09-24
constraint_exclusion = on # pgtune wizard 2013-09-24
checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24
effective_cache_size = 11GB # pgtune wizard 2013-09-24
work_mem = 96MB # pgtune wizard 2013-09-24
wal_buffers = 8MB # pgtune wizard 2013-09-24
checkpoint_segments = 16 # pgtune wizard 2013-09-24
shared_buffers = 3840MB # pgtune wizard 2013-09-24
pgbouncer.ini:

[databases]
anfitrion = host=127.0.0.1 port=5432 dbname=**** user=**** password=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'

[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = 127.0.0.1
listen_port = 6432
unix_socket_dir = /var/run/postgresql
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
;admin_users = user2, someadmin, otheradmin
;stats_users = stats, root
pool_mode = statement
server_reset_query = DISCARD ALL
;ignore_startup_parameters = extra_float_digits
;server_check_query = select 1
;server_check_delay = 30
; total number of clients that can connect
max_client_conn = 1000
default_pool_size = 80
;reserve_pool_size = 5
;reserve_pool_timeout = 3
;log_connections = 1
;log_disconnections = 1
;log_pooler_errors = 1
;server_round_robin = 0
;server_lifetime = 1200
;server_idle_timeout = 60
;server_connect_timeout = 15
;server_login_retry = 15
;query_timeout = 0
;query_wait_timeout = 0
;client_idle_timeout = 0
;client_login_timeout = 60
;autodb_idle_timeout = 3600
;pkt_buf = 2048
;listen_backlog = 128
;tcp_defer_accept = 0
;tcp_socket_buffer = 0
;tcp_keepalive = 1
;tcp_keepcnt = 0
;tcp_keepidle = 0
;tcp_keepintvl = 0
;dns_max_ttl = 15
;dns_zone_check_period = 0
free -h:

             total       used       free     shared    buffers     cached
Mem:           15G        11G       4.1G         0B       263M        10G
-/+ buffers/cache:       1.2G        14G
Swap:          30G         0B        30G
/proc/cpuinfo:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6186.05
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
apicid          : 2
initial apicid  : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6185.65
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
apicid          : 4
initial apicid  : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6185.66
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
Fragments of the original stored procedure (the per-device loop the answer advises against), reassembled:

    arr := regexp_split_to_array(_imeis, E'\\s+');
    FOR i IN 1..array_length(arr, 1) LOOP
        -- one query per device was issued here
    END LOOP;

The updated answer, with the ROW_NUMBER query wrapped in an outer SELECT so that rnumber can be filtered:

    RETURN QUERY 
        SELECT * FROM (
            SELECT 
                ROW_NUMBER() OVER(PARTITION BY dt.imei ORDER BY dt.date_time_process DESC) as rnumber
                --other fields here
            FROM gpstracking_device_tracks dt --added alias
            WHERE dt.imei = ANY(arr)
            AND dt.date_time_process >= date_trunc('hour', now()) 
            AND dt.date_time_process <= NOW()
        ) q   -- subquery alias required by Postgres
        where q.rnumber = 1;