
Copyright notice: this original article was published on this site by 萌叔.
Please credit 萌叔 | https://vearne.cc when republishing.

1. Introduction

imroc/req is arguably the most convenient request library available today and is much loved by Gophers, but there are still a few things to watch out for when using its connection pool.

2. Overview

Internally, imroc/req still relies on the standard library net/http to manage connections, so we first need to understand how net/http manages its connection pool.
transport.go

type Transport struct {
    idleMu     sync.Mutex
    wantIdle   bool                                // user has requested to close all idle conns
    idleConn   map[connectMethodKey][]*persistConn // most recently used at end
    idleConnCh map[connectMethodKey]chan *persistConn
    // MaxIdleConns controls the maximum number of idle (keep-alive)
    // connections across all hosts. Zero means no limit.
    MaxIdleConns int

    // MaxIdleConnsPerHost, if non-zero, controls the maximum idle
    // (keep-alive) connections to keep per-host. If zero,
    // DefaultMaxIdleConnsPerHost is used.
    MaxIdleConnsPerHost int

Here connectMethodKey identifies a single target; as you can see, it is simply the triple (proxy, scheme, addr).
MaxIdleConnsPerHost limits the number of idle connections kept for the same connectMethodKey.
DefaultMaxIdleConnsPerHost defaults to 2, which is nowhere near enough for highly concurrent workloads.

// connectMethodKey is the map key version of connectMethod, with a
// stringified proxy URL (or the empty string) instead of a pointer to
// a URL.
type connectMethodKey struct {
    proxy, scheme, addr string
}
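
To make this limit concrete, here is a minimal sketch (my own illustration, not code from the post) that counts TCP dials through a wrapped DialContext. With the default of 2 idle connections per host, a second burst of concurrent requests has to dial again; raising MaxIdleConnsPerHost lets the first burst's connections be reused. The test URL and helper names are made up.

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
    "sync"
    "sync/atomic"
)

// burst fires n concurrent GET requests, draining and closing each body
// so the underlying connections are eligible for reuse.
func burst(client *http.Client, url string, n int) {
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            resp, err := client.Get(url)
            if err != nil {
                return
            }
            io.Copy(io.Discard, resp.Body) // io.Discard needs Go 1.16+
            resp.Body.Close()
        }()
    }
    wg.Wait()
}

func main() {
    var dials int64
    dialer := &net.Dialer{}
    transport := &http.Transport{
        // Wrap DialContext so every new TCP connection is counted.
        DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
            atomic.AddInt64(&dials, 1)
            return dialer.DialContext(ctx, network, addr)
        },
        // Compare 2 (the default) with a larger value such as 100.
        MaxIdleConnsPerHost: 2,
    }
    client := &http.Client{Transport: transport}

    burst(client, "http://example.com/", 10) // hypothetical test URL
    first := atomic.LoadInt64(&dials)
    burst(client, "http://example.com/", 10)
    fmt.Printf("dials after 1st burst: %d, after 2nd burst: %d\n", first, atomic.LoadInt64(&dials))
}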

3. Lessons learned

3.1 Don't use req.New()

Don't write this:

r := req.New()

With this pattern, every call to New() creates a brand-new req.Req object. In fact the chain is
req.Req -> http.Client -> http.Transport
and it is the Transport that maintains the connection pool, so each new Req starts with an empty pool and none of the previously opened connections can be reused.
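
As a sketch against the v0.x API used in this post (the fetchBad/fetchGood names are my own), the difference looks like this:

package main

import (
    "fmt"

    "github.com/imroc/req"
)

// Bad: every call builds a new Req, and with it a new http.Client and
// http.Transport, so the connection pool starts empty each time.
func fetchBad(url string) (*req.Resp, error) {
    r := req.New()
    return r.Get(url)
}

// Better: build one Req up front and share it, or simply call the
// package-level req.Get, which reuses a single default Req internally.
var shared = req.New()

func fetchGood(url string) (*req.Resp, error) {
    return shared.Get(url)
}

func main() {
    if resp, err := fetchGood("http://www.baidu.com/"); err == nil {
        fmt.Println(resp.Response().StatusCode)
    }
}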

3.2 req does not use http.DefaultTransport by default

So you cannot resize its connection pool by modifying the global DefaultTransport:

http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 1000

For each target address, the idle connection pool is still capped at 2.

// DefaultMaxIdleConnsPerHost is the default value of Transport's
// MaxIdleConnsPerHost.
const DefaultMaxIdleConnsPerHost = 2
// Client return the default underlying http client
func (r *Req) Client() *http.Client {
    if r.client == nil {
        r.client = newClient()
    }
    return r.client
}
// create a default client
func newClient() *http.Client {
    jar, _ := cookiejar.New(nil)
    transport := &http.Transport{
        Proxy: http.ProxyFromEnvironment,
        DialContext: (&net.Dialer{
            Timeout:   30 * time.Second,
            KeepAlive: 30 * time.Second,
            DualStack: true,
        }).DialContext,
        MaxIdleConns:          100,
        IdleConnTimeout:       90 * time.Second,
        TLSHandshakeTimeout:   10 * time.Second,
        ExpectContinueTimeout: 1 * time.Second,
    }
    return &http.Client{
        Jar:       jar,
        Transport: transport,
        Timeout:   2 * time.Minute,
    }
}
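
A quick way to convince yourself (again a sketch of my own) is to compare the Transport that req actually uses, reachable through the Client() method quoted above, with http.DefaultTransport:

package main

import (
    "fmt"
    "net/http"

    "github.com/imroc/req"
)

func main() {
    // This global tweak only changes http.DefaultTransport ...
    http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 1000

    // ... but req builds its own Transport in newClient(), as shown above,
    // so the setting never reaches it.
    r := req.New()
    t := r.Client().Transport.(*http.Transport)
    fmt.Println(t == http.DefaultTransport.(*http.Transport)) // false: a different Transport
    fmt.Println(t.MaxIdleConnsPerHost)                        // 0, so DefaultMaxIdleConnsPerHost (2) applies
}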

3.3 The correct usage

package main

import (
    "github.com/imroc/req"
    "net/http"
    "time"
)

func SetConnPool() {
    client := &http.Client{}
    client.Transport = &http.Transport{
        MaxIdleConnsPerHost: 500,
        // No need to set MaxIdleConns here.
        // MaxIdleConns controls the maximum number of idle (keep-alive)
        // connections across all hosts. Zero means no limit.
        // It defaults to 0, i.e. unlimited.
    }

    req.SetClient(client)
    req.SetTimeout(5 * time.Second)
}

func main() {
    // Call this in main so the settings take effect for the whole program.
    SetConnPool()
    req.Get("http://www.baidu.com/")
}
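
If you prefer not to touch the package-level default, the same idea should also work on a dedicated Req instance; this assumes the instance-level SetClient method mirrors the package-level one used above:

package main

import (
    "net/http"
    "time"

    "github.com/imroc/req"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            MaxIdleConnsPerHost: 500,
        },
    }

    // Configure one Req instance instead of the shared default.
    r := req.New()
    r.SetClient(client)
    r.Get("http://www.baidu.com/")
}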

3.4 Make sure to read the response body to completion

If the response body is not read to completion, what is the next goroutine that picks up this connection supposed to do with the leftover bytes in its read buffer? Handing such a connection back would clearly be irresponsible, so http.Transport simply closes it instead of returning it to the idle pool.

http.Response

type Response struct {
    // Body represents the response body.
    //
    // The response body is streamed on demand as the Body field
    // is read. If the network connection fails or the server
    // terminates the response, Body.Read calls return an error.
    //
    // The http Client and Transport guarantee that Body is always
    // non-nil, even on responses without a body or responses with
    // a zero-length body. It is the caller's responsibility to
    // close Body. The default HTTP client's Transport may not
    // reuse HTTP/1.x "keep-alive" TCP connections if the Body is
    // not read to completion and closed.
    //
    // The Body is automatically dechunked if the server replied
    // with a "chunked" Transfer-Encoding.
    Body io.ReadCloser
    ... ...
}
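
In practice that means draining the body and then closing it, whether you go through req or use the standard library directly. A minimal sketch with plain net/http (io.Discard needs Go 1.16+; on older versions use ioutil.Discard):

package main

import (
    "io"
    "net/http"
)

// fetch drains and closes the body so the TCP connection can be returned
// to the idle pool instead of being closed by the Transport.
func fetch(url string) error {
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    _, err = io.Copy(io.Discard, resp.Body)
    return err
}

func main() {
    _ = fetch("http://www.baidu.com/")
}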

4. Summary

This article went through the issues to watch out for when using the connection pool of the imroc/req library.

