
Hi, can someone please list the algorithms available in HAProxy, each with a brief definition? From the manual I can see that HAProxy uses several algorithms.

Regards


Do you mean load balancing algorithms? –


Yes, the load balancing algorithms used in HAProxy – Jose

Answers


<algorithm> is the algorithm used to select a server when doing load 
      balancing. This only applies when no persistence information 
      is available, or when a connection is redispatched to another 
      server. <algorithm> may be one of the following : 

    roundrobin Each server is used in turns, according to their weights. 
       This is the smoothest and fairest algorithm when the server's 
       processing time remains equally distributed. This algorithm 
       is dynamic, which means that server weights may be adjusted 
       on the fly for slow starts for instance. It is limited by 
       design to 4095 active servers per backend. Note that in some 
       large farms, when a server becomes up after having been down 
       for a very short time, it may sometimes take a few hundreds 
       requests for it to be re-integrated into the farm and start 
       receiving traffic. This is normal, though very rare. It is 
       indicated here in case you would have the chance to observe 
       it, so that you don't worry. 
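
For instance, a minimal backend using roundrobin might look like the sketch below (the backend name, server names and addresses are placeholders, not taken from the manual):

    backend app_servers
        balance roundrobin
        # weights steer the rotation: web2 receives twice as many requests as web1
        server web1 192.168.10.11:80 check weight 10
        server web2 192.168.10.12:80 check weight 20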

    static-rr Each server is used in turns, according to their weights. 
       This algorithm is similar to roundrobin, except that it is 
       static, which means that changing a server's weight on the 
       fly will have no effect. On the other hand, it has no design 
       limitation on the number of servers, and when a server goes 
       up, it is always immediately reintroduced into the farm, once 
       the full map is recomputed. It also uses slightly less CPU to 
       run (around -1%). 
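
A static-rr backend is declared the same way; only the algorithm name changes (the names and addresses below are illustrative):

    backend static_pool
        balance static-rr
        server web1 192.168.10.11:80 check
        server web2 192.168.10.12:80 check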

    leastconn The server with the lowest number of connections receives the 
       connection. Round-robin is performed within groups of servers 
       of the same load to ensure that all servers will be used. Use 
       of this algorithm is recommended where very long sessions are 
       expected, such as LDAP, SQL, TSE, etc... but is not very well 
       suited for protocols using short sessions such as HTTP. This 
       algorithm is dynamic, which means that server weights may be 
       adjusted on the fly for slow starts for instance. 
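
As a sketch, a TCP backend for long-lived sessions (for example a SQL farm; names and addresses are made up) could use leastconn like this:

    backend sql_pool
        mode tcp
        balance leastconn
        # new connections go to whichever server currently has the fewest
        server db1 192.168.10.21:3306 check
        server db2 192.168.10.22:3306 check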

    source  The source IP address is hashed and divided by the total 
       weight of the running servers to designate which server will 
       receive the request. This ensures that the same client IP 
       address will always reach the same server as long as no 
       server goes down or up. If the hash result changes due to the 
       number of running servers changing, many clients will be 
       directed to a different server. This algorithm is generally 
       used in TCP mode where no cookie may be inserted. It may also 
       be used on the Internet to provide a best-effort stickiness 
       to clients which refuse session cookies. This algorithm is 
       static by default, which means that changing a server's 
       weight on the fly will have no effect, but this can be 
       changed using "hash-type". 
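
A possible TCP backend with source-IP stickiness, made dynamic with "hash-type" as the manual describes (names and addresses are placeholders):

    backend tcp_pool
        mode tcp
        balance source
        # consistent hashing limits how many clients move when servers or weights change
        hash-type consistent
        server srv1 192.168.10.31:443 check
        server srv2 192.168.10.32:443 check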

    uri   This algorithm hashes either the left part of the URI (before 
       the question mark) or the whole URI (if the "whole" parameter 
       is present) and divides the hash value by the total weight of 
       the running servers. The result designates which server will 
       receive the request. This ensures that the same URI will 
       always be directed to the same server as long as no server 
       goes up or down. This is used with proxy caches and 
       anti-virus proxies in order to maximize the cache hit rate. 
       Note that this algorithm may only be used in an HTTP backend. 
       This algorithm is static by default, which means that 
       changing a server's weight on the fly will have no effect, 
       but this can be changed using "hash-type". 

       This algorithm supports two optional parameters "len" and 
       "depth", both followed by a positive integer number. These 
       options may be helpful when it is needed to balance servers 
       based on the beginning of the URI only. The "len" parameter 
       indicates that the algorithm should only consider that many 
       characters at the beginning of the URI to compute the hash. 
       Note that having "len" set to 1 rarely makes sense since most 
       URIs start with a leading "/". 

       The "depth" parameter indicates the maximum directory depth 
       to be used to compute the hash. One level is counted for each 
       slash in the request. If both parameters are specified, the 
       evaluation stops when either is reached. 
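
For example, a cache backend hashing only the first two directory levels of the URI might be written as follows (backend and server names are invented):

    backend cache_pool
        mode http
        # hash at most two path levels of the URI
        balance uri depth 2
        server cache1 192.168.10.41:80 check
        server cache2 192.168.10.42:80 check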

    url_param The URL parameter specified in argument will be looked up in 
       the query string of each HTTP GET request. 

       If the modifier "check_post" is used, then an HTTP POST 
       request entity will be searched for the parameter argument, 
       when it is not found in a query string after a question mark 
       ('?') in the URL. Optionally, specify a number of octets to 
       wait for before attempting to search the message body. If the 
       entity can not be searched, then round robin is used for each 
       request. For instance, if your clients always send the LB 
       parameter in the first 128 bytes, then specify that. The 
       default is 48. The entity data will not be scanned until the 
       required number of octets have arrived at the gateway, this 
       is the minimum of: (default/max_wait, Content-Length or first 
       chunk length). If Content-Length is missing or zero, it does 
       not need to wait for more data than the client promised to 
       send. When Content-Length is present and larger than 
       <max_wait>, then waiting is limited to <max_wait> and it is 
       assumed that this will be enough data to search for the 
       presence of the parameter. In the unlikely event that 
       Transfer-Encoding: chunked is used, only the first chunk is 
       scanned. Parameter values separated by a chunk boundary may be 
       randomly balanced, if at all. 

       If the parameter is found followed by an equal sign ('=') and 
       a value, then the value is hashed and divided by the total 
       weight of the running servers. The result designates which 
       server will receive the request. 

       This is used to track user identifiers in requests and ensure 
       that a same user ID will always be sent to the same server as 
       long as no server goes up or down. If no value is found or if 
       the parameter is not found, then a round robin algorithm is 
       applied. Note that this algorithm may only be used in an HTTP 
       backend. This algorithm is static by default, which means 
       that changing a server's weight on the fly will have no 
       effect, but this can be changed using "hash-type". 
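
Here is a sketch of a backend that keeps a user on the same server based on a "userid" parameter; the parameter name, the 64-byte wait and the server entries are assumptions for illustration:

    backend app_pool
        mode http
        # hash the value of "userid", also searching up to 64 bytes of POST bodies
        balance url_param userid check_post 64
        server app1 192.168.10.51:80 check
        server app2 192.168.10.52:80 check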

    hdr(<name>) The HTTP header <name> will be looked up in each HTTP request. 
       Just as with the equivalent ACL 'hdr()' function, the header 
       name in parenthesis is not case sensitive. If the header is 
       absent or if it does not contain any value, the roundrobin 
       algorithm is applied instead. 

       An optional 'use_domain_only' parameter is available, for 
       reducing the hash algorithm to the main domain part with some 
       specific headers such as 'Host'. For instance, in the Host 
       value "haproxy.1wt.eu", only "1wt" will be considered. 

       This algorithm is static by default, which means that 
       changing a server's weight on the fly will have no effect, 
       but this can be changed using "hash-type". 
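
For instance, balancing on the Host header reduced to its main domain could look like this (names and addresses are placeholders):

    backend host_pool
        mode http
        balance hdr(Host) use_domain_only
        server h1 192.168.10.61:80 check
        server h2 192.168.10.62:80 check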

    rdp-cookie 
    rdp-cookie(name) 
       The RDP cookie <name> (or "mstshash" if omitted) will be 
       looked up and hashed for each incoming TCP request. Just as 
       with the equivalent ACL 'req_rdp_cookie()' function, the name 
       is not case-sensitive. This mechanism is useful as a degraded 
       persistence mode, as it makes it possible to always send the 
       same user (or the same session ID) to the same server. If the 
       cookie is not found, the normal roundrobin algorithm is 
       used instead. 

       Note that for this to work, the frontend must ensure that an 
       RDP cookie is already present in the request buffer. For this 
       you must use 'tcp-request content accept' rule combined with 
       a 'req_rdp_cookie_cnt' ACL. 

       This algorithm is static by default, which means that 
       changing a server's weight on the fly will have no effect, 
       but this can be changed using "hash-type". 
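
A minimal sketch of a Terminal Server farm using the default "mstshash" cookie, together with the frontend rules the manual mentions (the listen name, inspect delay and addresses are illustrative):

    listen tse_farm
        bind :3389
        mode tcp
        # wait up to 5s for an RDP cookie before balancing
        tcp-request inspect-delay 5s
        acl has_rdp_cookie req_rdp_cookie_cnt gt 0
        tcp-request content accept if has_rdp_cookie
        balance rdp-cookie
        server ts1 192.168.10.71:3389 check
        server ts2 192.168.10.72:3389 check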

<arguments> is an optional list of arguments which may be needed by some 
      algorithms. Right now, only "url_param" and "uri" support an 
      optional argument. 

      balance uri [len <len>] [depth <depth>] 
      balance url_param <param> [check_post [<max_wait>]] 

The load balancing algorithm of a backend is set to roundrobin when no other algorithm, mode nor option have been set. The algorithm may only be set once for each backend.


Remove the code indentation and use bullet points instead. – ZF007
