Postico 1.5.3 – A Modern PostgreSQL Client Documentation
Postico provides an easy-to-use interface, making Postgres more accessible for newcomers and specialists alike. Postico will look familiar to anyone who has used a Mac before. Just connect to a database and begin working with tables and views. Start with the basics and learn about advanced features of PostgreSQL as you go along. Postico is a modern, friendly database client: it is great for reporting and data entry, its structure editor is a popular feature among web and application developers, and it is used increasingly for data science and analytics.
by Greg Smith, Robert Treat, and Christopher Browne
PostgreSQL ships with a basic configuration tuned for wide compatibility rather than performance. Odds are good the default parameters are very undersized for your system. Rather than get dragged into the details of everything you should eventually know (which you can find if you want it at the GUC Three Hour Tour), here we're going to sprint through a simplified view of the basics, with a look at the most common things people new to PostgreSQL aren't aware of. You should click on the name of the parameter in each section to jump to the relevant documentation in the PostgreSQL manual for more details after reading the quick intro here. There is also additional information available about many of these parameters, as well as a list of parameters you shouldn't adjust, at Server Configuration Tuning.
Basic information about settings
PostgreSQL settings can be manipulated a number of different ways, but generally you will want to update them in your postgresql.conf file. The specific options available change from release to release; the definitive list is in the source code at src/backend/utils/misc/guc.c for your version of PostgreSQL (but the pg_settings view works well enough for most purposes).
Types of settings
There are several different types of configuration settings, divided up based on the possible inputs they take:
- Boolean: true, false, on, off
- Integer: Whole numbers (2112)
- Float: Decimal values (21.12)
- Memory / Disk: Integers (2112) or 'computer units' (512MB, 2112GB). Avoid integers--you need to know the underlying unit to figure out what they mean. Computer units are only available starting in version 8.2.
- Time: 'Time units' aka d,m,s (30s). Sometimes the unit is left out; don't do that
- Strings: Single quoted text ('pg_log')
- ENUMs: Strings, but from a specific list ('WARNING', 'ERROR')
- Lists: A comma separated list of strings ('$user',public,tsearch2)
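For illustration, here is a hypothetical postgresql.conf fragment showing each value type (the parameter names are real, but the values are arbitrary examples, not recommendations):

    autovacuum = on                        # boolean
    max_connections = 100                  # integer
    checkpoint_completion_target = 0.9     # float
    shared_buffers = 512MB                 # memory, in 'computer units' (8.2+)
    authentication_timeout = 30s           # time, with an explicit unit
    log_directory = 'pg_log'               # string
    client_min_messages = 'warning'        # enum
    search_path = '"$user", public'        # list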
When they take effect
PostgreSQL settings have different levels of flexibility for when they can be changed, usually related to internal code restrictions. The complete list of levels is:
- Postmaster: requires restart of server
- Sighup: requires a HUP of the server, either by kill -HUP (usually -1), pg_ctl reload, or select pg_reload_conf();
- User: can be set within individual sessions, take effect only within that session
- Internal: set at compile time, can't be changed, mainly for reference
- Backend: settings which must be set before session start
- Superuser: can be set at runtime for the server by superusers
Most of the time you'll only use the first of these, but the second can be useful if you have a server you don't want to take down, while the user session settings can be helpful for some special situations. You can tell which type of parameter a setting is by looking at the 'context' field in the pg_settings view.
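For example, you can check how flexible a few parameters are with a query like this (substitute whichever parameter names you care about):

    SELECT name, context
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'listen_addresses');
    -- shared_buffers and listen_addresses report 'postmaster' (restart required),
    -- while work_mem reports 'user' (changeable per session)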
Important notes about postgresql.conf
- You should be able to find it at $PGDATA/postgresql.conf; watch out for symbolic links and other trickiness
- You can figure out the file location with SHOW config_file
- Lines starting with # are comments and have no effect. For a new database, this means the setting is using the default, but on running systems this may not hold true! In versions before 8.3, commenting out a setting does not restore it to the default. Even in versions after that, changes to postgresql.conf do not take effect without a reload/restart, so it's possible for the system to be running something different than what is in the file.
- If the same setting is listed multiple times, the last one wins
Viewing the current settings
- Look in postgresql.conf. This works if you follow good practice, but it's not definitive!
- show all, show <setting> will show you the current value of the setting. Watch out for session specific changes
- select * from pg_settings will label session specific changes as locally modified
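A minimal sketch of that last approach, using the source column of pg_settings to spot values that didn't come from the defaults or the configuration file (session-level changes show up with source = 'session'):

    SELECT name, setting, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'configuration file');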
listen_addresses
By default, PostgreSQL only responds to connections from the local host. If you want your server to be accessible from other systems via standard TCP/IP networking, you need to change listen_addresses from its default. The usual approach is to set it to listen to all addresses like this:
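    listen_addresses = '*'    # listen on every available network interface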
And then control who can and cannot connect via the pg_hba.conf file.
max_connections
max_connections sets exactly that: the maximum number of client connections allowed. This is very important to some of the parameters below (particularly work_mem) because there are some memory resources that are or can be allocated on a per-client basis, so the maximum number of clients suggests the maximum possible memory use. Generally, PostgreSQL on good hardware can support a few hundred connections. If you want to have thousands instead, you should consider using connection pooling software to reduce the connection overhead.
shared_buffers
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data. One reason the defaults are low is because on some platforms (like older Solaris versions and SGI), having large values requires invasive action like recompiling the kernel. Even on a modern Linux system, the stock kernel will likely not allow setting shared_buffers to over 32MB without adjusting kernel settings first.
If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system. If you have less RAM you'll have to account more carefully for how much RAM the OS is taking up; closer to 15% is more typical there. There are some workloads where even larger settings for shared_buffers are effective, but given the way PostgreSQL also relies on the operating system cache, it's unlikely you'll find using more than 40% of RAM to work better than a smaller amount.
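As a hypothetical worked example, on a dedicated server with 8GB of RAM that guideline works out to:

    shared_buffers = 2GB    # 1/4 of 8GB total memory on a dedicated server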
Be aware that if your system or PostgreSQL build is 32-bit, it might not be practical to set shared_buffers above 2 to 2.5GB. See this blog post for details.
Note that on Windows (and on PostgreSQL versions before 8.1), large values for shared_buffers aren't as effective, and you may find better results keeping it relatively low and using the OS cache more instead. On Windows the useful range is 64MB to 512MB, and for earlier than 8.1 versions the effective upper limit is near shared_buffers=50000 (just under 400MB--older versions before 8.2 don't allow using MB values for their settings, you specify this parameter in 8K blocks)
It's likely you will have to increase the amount of memory your operating system allows you to allocate at once to set the value for shared_buffers this high. On UNIX-like systems, if you set it above what's supported, you'll get a message like this:
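    FATAL:  could not create shared memory segment: Invalid argument
    DETAIL:  Failed system call was shmget(key=5440001, size=4011376640, 03600)

(The exact key and size values will differ on your system; this is the style of message shown in the PostgreSQL manual.)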
See Managing Kernel Resources for details on how to correct this.
Changing this setting requires restarting the database. Also, this is a hard allocation of memory; the whole thing gets allocated out of virtual memory when the database starts.
effective_cache_size
effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what's used by the OS itself and other applications. This is a guideline for how much memory you expect to be available in the OS and PostgreSQL buffer caches, not an allocation! This value is used only by the PostgreSQL query planner to figure out whether plans it's considering would be expected to fit in RAM or not. If it's set too low, indexes may not be used for executing queries the way you'd expect. The setting for shared_buffers is not taken into account here--only the effective_cache_size value is, so it should include memory dedicated to the database too.
Setting effective_cache_size to 1/2 of total memory would be a normal conservative setting, and 3/4 of memory is a more aggressive but still reasonable amount. You might find a better estimate by looking at your operating system's statistics. On UNIX-like systems, add the free+cached numbers from free or top to get an estimate. On Windows see the 'System Cache' size in the Windows Task Manager's Performance tab. Changing this setting does not require restarting the database (HUP is enough).
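As a hypothetical worked example, on the same dedicated 8GB server used above, the more aggressive guideline would be:

    effective_cache_size = 6GB    # roughly 3/4 of 8GB total memory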
checkpoint_segments checkpoint_completion_target
PostgreSQL writes new transactions to the database in files called WAL segments that are 16MB in size. Every time checkpoint_segments worth of these files have been written, by default 3, a checkpoint occurs. Checkpoints can be resource intensive, and on a modern system doing one every 48MB will be a serious performance bottleneck. Setting checkpoint_segments to a much larger value improves that. Unless you're running on a very small configuration, you'll almost certainly be better setting this to at least 10, which also allows usefully increasing the completion target.
For more write-heavy systems, values from 32 (checkpoint every 512MB) to 256 (every 4GB) are popular nowadays. Very large settings use a lot more disk and will cause your database to take longer to recover, so make sure you're comfortable with both those things before large increases. Normally the large settings (>64/1GB) are only used for bulk loading. Note that whatever you choose for the segments, you'll still get a checkpoint at least every 5 minutes unless you also increase checkpoint_timeout (which isn't necessary on most systems).
- PostgreSQL 8.3 and newer
Starting with PostgreSQL 8.3, the checkpoint writes are spread out a bit while the system starts working toward the next checkpoint. You can spread those writes out further, lowering the average write overhead, by increasing the checkpoint_completion_target parameter to its useful maximum of 0.9 (aim to finish by the time 90% of the next checkpoint is here) rather than the default of 0.5 (aim to finish when the next one is 50% done). A setting of 0 gives something similar to the behavior of the earlier versions. The main reason the default isn't just 0.9 is that you need a larger checkpoint_segments value than the default for broader spreading to work well. For lots more information on checkpoint tuning, see Checkpoints and the Background Writer (where you'll also learn why tuning the background writer parameters, particularly those in 8.2 and below, is challenging to do usefully).
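Putting the last two sections together, a reasonable sketch of a starting point for a moderately write-heavy 8.3+ system (adjust for your workload, disk space, and recovery-time tolerance) might be:

    checkpoint_segments = 32              # checkpoint every ~512MB of WAL
    checkpoint_completion_target = 0.9    # spread checkpoint writes out
    checkpoint_timeout = 5min             # the default; rarely needs changing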
autovacuum max_fsm_pages
The autovacuum process takes care of several maintenance chores inside your database that you really need. Generally, if you think you need to turn regular vacuuming off because it's taking too much time or resources, that means you're doing it wrong. The answer to almost all vacuuming problems is to vacuum more often, not less, so that each individual vacuum operation has less to clean up.
However, it's acceptable to disable autovacuum for short periods of time, for instance when bulk loading large amounts of data.
- PostgreSQL 8.4 and newer
The FSM was rewritten for PostgreSQL 8.4, so earlier advice is no longer applicable. The max_fsm_pages and max_fsm_relations settings are gone, as the new FSM is self-adapting (more info). autovacuum is enabled by default and should remain so, as vacuum is much less invasive in 8.4 than before thanks to visibility maps.
- PostgreSQL 8.3 and earlier
As of 8.3, autovacuum is turned on by default, and you should keep it that way. In 8.1 and 8.2 you will have to turn it on yourself. Note that in those earlier versions, you may need to tweak its settings a bit to make it aggressive enough; it may not do enough work by default if you have a larger database or do lots of updates.
You may also need to increase the value of max_fsm_pages and max_fsm_relations. The Free Space Map is used to track where there are dead tuples (rows) that may be reclaimed. You will only get effective nonblocking VACUUM queries if the dead tuples can be listed in the Free Space Map. As a result, if you do not plan to run VACUUM frequently, and if you expect a lot of updates, you should ensure these values are usefully large (and remember, these values are cluster wide, not database wide). It should be easy enough to set max_fsm_relations high enough; the problem that will more typically occur is when max_fsm_pages is not set high enough. Once the Free Space Map is full, VACUUM will be unable to track further dead pages. In a busy database this needs to be set much higher than 1000. Also remember that changing these settings requires a restart of the database, so it is wise to err on the side of comfortable margins.
If you run VACUUM VERBOSE on your database, it'll tell you how many pages and relations are in use (and, under 8.3, what the current limits are). For example,
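The last lines of the output look something like this (the numbers here are from a hypothetical small database):

    INFO:  free space map contains 5293 pages in 214 relations
    DETAIL:  A total of 8528 page slots are in use (including overhead).
    8528 page slots are required to track all free space.
    Current limits are:  204800 page slots, 1000 relations, using 1265 kB.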
If you find that your settings are already too low, you will likely need to do aggressive vacuuming of your system, and possibly reindexing and VACUUM FULL may be needed as well. If you're getting close to the limits for page slots, typical practice is to just double the current values, with perhaps a smaller percentage increase once you've gotten much higher (in the millions range). For the max relations setting, note that it covers all the databases in your cluster.
One other situation to be aware of is that of a database approaching autovacuum_freeze_max_age. When a database approaches this point, it will begin to vacuum every table in the database that has not been vacuumed before. On some systems this may not result in much activity, but for systems where there are a lot of tables that are not modified often, this can be a more common occurrence (especially if the system has gone through a dump/restore, say for upgrading). The significance of all of this is that, even on a system with well-chosen FSM settings, once your system begins vacuuming all of those additional tables, your old FSM settings may no longer be appropriate.
logging
There are many things you can log that may or may not be important to you. You should investigate the documentation on all of the options, but here are some tips & tricks to get you started:
- pgFouine is a tool used to analyze PostgreSQL logs for performance tuning. If you plan to use this tool, it has specific logging requirements. Please check http://pgfouine.projects.postgresql.org/
- a newer alternative to pgFouine is pgbadger: http://dalibo.github.com/pgbadger/
- log_destination & log_directory (& log_filename): What you set these options to is not as important as knowing they can give you hints to determine where your database server is logging to. Best practice would be to try and make this as similar as possible across your servers. Note that in some cases, the init script starting your database may be customizing the log destination in the command line used to start the database, overriding what's in the postgresql.conf (and making it so you'll get different behavior if you run pg_ctl manually instead of using the init script).
- log_min_error_statement: You should probably make sure this is set to at least error, so that you will see any SQL statements that cause an error. This should be the default on recent versions.
- log_min_duration_statement: Not necessary for everyday use, but this can generate logs of 'slow queries' on your system.
- log_line_prefix: Prepends information to the start of each log line. A good generic recommendation is '%t:%r:%u@%d:[%p]: ', where %t=timestamp, %u=database user name, %r=host connecting from, %d=database connecting to, %p=PID of connection. It may not be obvious what the PID is useful for at first, but it can be vital for troubleshooting problems in the future, so it's better to put it in the logs from the start.
- log_statement: Choices of none, ddl, mod, all. Using all in production leads to severe performance penalties. ddl can sometimes be helpful for discovering rogue changes made outside of your recommended processes, by 'cowboy DBAs' for example.
default_statistics_target
The database software collects statistics about each of the tables in your database to decide how to execute queries against it. In earlier versions of PostgreSQL, the default setting of 10 doesn't collect very much information, and if you're not getting good query execution plans, particularly on larger (or more varied) tables, you should increase default_statistics_target and then ANALYZE the database again (or wait for autovacuum to do it for you).
- PostgreSQL 8.4 and later
The starting default_statistics_target value was raised from 10 to 100 in PostgreSQL 8.4. Increases beyond 100 may still be useful, but this increase makes for greatly improved statistics estimation in the default configuration. The maximum value for the parameter was also increased from 1000 to 10,000 in 8.4.
work_mem maintenance_work_mem
If you do a lot of complex sorts, and have a lot of memory, then increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts which, unsurprisingly, will be faster than disk-based equivalents.
This size is applied to each and every sort done by each user, and complex queries can use multiple working memory sort buffers. Set it to 50MB, and have 30 users submitting queries, and you are soon using 1.5GB of real memory. Furthermore, if a query involves doing merge sorts of 8 tables, that requires 8 times work_mem. You need to consider what you set max_connections to in order to size this parameter correctly. This is a setting where data warehouse systems, where users are submitting very large queries, can readily make use of many gigabytes of memory.
maintenance_work_mem is used for operations like vacuum. Using extremely large values here doesn't help very much, and because you essentially need to reserve that memory for when vacuum kicks in, takes it away from more useful purposes. Something in the 256MB range has anecdotally been a reasonably large setting here.
- PostgreSQL 8.3 and later
In 8.3 you can use log_temp_files to figure out if sorts are using disk instead of fitting in memory. In earlier versions, you might instead just monitor the size of them by looking at how much space is being used in the various $PGDATA/base/<db oid>/pgsql_tmp files. You can see sorts to disk happen in EXPLAIN ANALYZE plans as well. For example, if you see a line like 'Sort Method: external merge Disk: 7526kB' in there, you'd know a work_mem of at least 8MB would really improve how fast that query executed, by sorting in RAM instead of swapping to disk.
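A sketch of that workflow, using a hypothetical orders table; setting work_mem at the session level lets you test a larger value without touching postgresql.conf:

    -- Check whether the sort spills to disk at the current work_mem
    EXPLAIN ANALYZE SELECT * FROM orders ORDER BY customer_id;
    --   ... Sort Method:  external merge  Disk: 7526kB ...

    -- Try again with more sort memory, for this session only
    SET work_mem = '16MB';
    EXPLAIN ANALYZE SELECT * FROM orders ORDER BY customer_id;
    --   ... Sort Method:  quicksort  Memory: 9254kB ...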
wal_sync_method wal_buffers
After every transaction, PostgreSQL forces a commit to disk out to its write-ahead log. This can be done a couple of ways, and on some platforms the other options are considerably faster than the conservative default. open_sync is the most common non-default setting switched to, on platforms that support it but default to one of the fsync methods. See Tuning PostgreSQL WAL Synchronization for a lot of background on this topic. Note that open_sync writing is buggy on some platforms (such as Linux), and you should (as always) do plenty of tests under a heavy write load to make sure that you haven't made your system less stable with this change. Reliable Writes contains more information on this topic.
Linux kernels starting with version 2.6.33 will cause earlier versions of PostgreSQL to default to wal_sync_method=open_datasync; before that kernel release the default picked was always fdatasync. This can cause a significant performance decrease when combined with small writes and/or small values for wal_buffers.
Increasing wal_buffers from its tiny default of a small number of kilobytes is helpful for write-heavy systems. Benchmarking generally suggests that just increasing to 1MB is enough for some large systems, and given the amount of RAM in modern servers allocating a full WAL segment (16MB, the useful upper-limit here) is reasonable. Changing wal_buffers requires a database restart.
- PostgreSQL 9.1 and later
Starting with PostgreSQL 9.1 wal_buffers defaults to being 1/32 of the size of shared_buffers, with an upper limit of 16MB (reached when shared_buffers=512MB).
PostgreSQL 9.1 also changes the logic for selecting the default wal_sync_method such that on newer Linux kernels, it will still select fdatasync as its method--the same as on older Linux versions.
constraint_exclusion
- PostgreSQL 8.4 and later
In 8.4, constraint_exclusion defaults to a new choice: partition. This enables constraint exclusion only for partitioned tables, which is the right thing to do in nearly all cases.
- PostgreSQL 8.3 and earlier
If you plan to use table partitioning, you need to turn on constraint exclusion. Since it does add overhead to query planning, it is recommended you leave this off outside of this scenario.
max_prepared_transactions
This setting is used for managing two-phase commit. If you do not use two-phase commit (and if you don't know what it is, you don't use it), then you can set this value to 0. That will save a little bit of shared memory. For database systems with a large number (at least hundreds) of concurrent connections, be aware that this setting also affects the number of available lock slots in pg_locks, so you may want to leave it at the default setting. There is a formula for how much memory gets allocated in the docs and in the default postgresql.conf.
Changing max_prepared_transactions requires a server restart.
synchronous_commit
PostgreSQL can only safely use a write cache if it has a battery backup. See WAL reliability for an essential introduction to this topic. No, really; go read that right now, it's vital to understand that if you want your database to work right.
You may be limited to approximately 100 transaction commits per second per client in situations where you don't have such a durable write cache (and perhaps only 500/second even with lots of clients).
- PostgreSQL 8.3 and later
Asynchronous commit was introduced in PostgreSQL 8.3. For situations where a small amount of data loss is acceptable in return for a large boost in how many updates you can do to the database per second, consider switching synchronous commit off. This is particularly useful in the situation where you do not have a battery-backed write cache on your disk controller, because you could potentially get thousands of commits per second instead of just a few hundred.
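synchronous_commit can be changed without a restart, and even per session or per transaction, so you can limit the exposure to just the writes that can tolerate losing the last few moments of data. A sketch (the page_hits table is hypothetical):

    -- For this session only: commits return before the WAL flush completes
    SET synchronous_commit TO off;

    -- Or scope it to a single transaction with SET LOCAL:
    BEGIN;
    SET LOCAL synchronous_commit TO off;
    INSERT INTO page_hits (url) VALUES ('/index.html');
    COMMIT;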
For earlier versions of PostgreSQL, you may find people recommending that you set fsync=off to speed up writes on busy systems. This is dangerous--a power loss could result in your database getting corrupted and being unable to start again. Turning synchronous_commit off doesn't introduce the risk of corruption, which is really bad, just some risk of data loss.
random_page_cost
This setting suggests to the optimizer how long it will take your disks to seek to a random disk page, as a multiple of how long a sequential read (with a cost of 1.0) takes. If you have particularly fast disks, as commonly found with RAID arrays of SCSI disks, it may be appropriate to lower random_page_cost, which will encourage the query optimizer to use random access index scans. Some feel that 4.0 is always too large on current hardware; it's not unusual for administrators to standardize on always setting this between 2.0 and 3.0 instead. In some cases that behavior is a holdover from earlier PostgreSQL versions where having random_page_cost too high was more likely to screw up plan optimization than it is now (and setting it at or below 2.0 was regularly necessary). Since these cost estimates are just that--estimates--it shouldn't hurt to try lower values.
But this is not where you should start when searching for plan problems; note that random_page_cost is pretty far down this list (at the end, in fact). If you are getting bad plans, this shouldn't be the first thing you look at, even though lowering this value may be effective. Instead, you should start by making sure autovacuum is working properly, that you are collecting enough statistics, and that you have correctly sized the memory parameters for your server--all the things gone over above. After you've done all those much more important things, if you're still getting bad plans then you should see whether lowering random_page_cost is still useful.
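If you do reach that point, you can experiment at the session level before changing the server-wide value; a sketch with a hypothetical query:

    -- Lower the value for this session only and compare the plans
    SET random_page_cost = 2.0;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    RESET random_page_cost;   -- back to the server-wide setting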