Implementing a timeout middleware for gin (continued)
1. Introduction
In my previous article, we looked at how to implement a timeout middleware that is non-intrusive to business code, but one problem was left open: when a timeout occurs, the child goroutines left running in the background can keep accumulating, leaking goroutines and eventually crashing the program.
2. Solution
To solve the problem of the child goroutine not exiting, the middleware must notify the child goroutine when a timeout occurs so that it, too, exits as soon as possible.
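The key is the cancel function returned by context.WithTimeout: when the deadline expires, the middleware calls cancel(), so any context-aware call inside the handler (such as a gRPC request) returns early and the child goroutine can finish. In skeleton form the pattern looks like this (a sketch only; the full middleware follows below):

    ctx, cancel := context.WithTimeout(c.Request.Context(), t)
    c.Request = c.Request.WithContext(ctx)
    finish := make(chan struct{})
    // run the rest of the handler chain in a child goroutine
    go func() {
        c.Next()
        finish <- struct{}{}
    }()
    select {
    case <-ctx.Done():
        // the timeout fired first: tell downstream calls to give up
        cancel()
    case <-finish:
        // the handler completed in time
    }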
In the example below, the gin handler long(c *gin.Context) makes a call to a gRPC service. We use the greeter_server example provided in grpc/grpc-go as the gRPC server.
// Package main implements a server for Greeter service.
package main

import (
    "context"
    "log"
    "net"
    "time"

    "google.golang.org/grpc"
    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

const (
    port = ":50051"
)

// server is used to implement helloworld.GreeterServer.
type server struct{}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    log.Printf("Received: %v", in.Name)
    // simulate a slow RPC; see the note below
    time.Sleep(2 * time.Second)
    return &pb.HelloReply{Message: "Hello " + in.Name}, nil
}

func main() {
    lis, err := net.Listen("tcp", port)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    pb.RegisterGreeterServer(s, &server{})
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
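Note that SayHello deliberately sleeps for 2 seconds before replying, longer than the 1-second limit the gin middleware below will enforce, so every request to /long is guaranteed to hit the deadline.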
The gin side of the code:
main.go
package main

import (
    "bytes"
    "context"
    "log"
    "net/http"
    "time"

    "github.com/gin-gonic/gin"
    "github.com/vearne/golib/buffpool"
    "google.golang.org/grpc"
    pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

const (
    address     = "localhost:50051"
    defaultName = "world"
)

// SimplebodyWriter buffers the response body instead of writing it
// to the client right away.
type SimplebodyWriter struct {
    gin.ResponseWriter
    body *bytes.Buffer
}

func (w SimplebodyWriter) Write(b []byte) (int, error) {
    return w.body.Write(b)
}

func Timeout(t time.Duration) gin.HandlerFunc {
    return func(c *gin.Context) {
        // get a reusable buffer from a sync.Pool-backed pool
        buffer := buffpool.GetBuff()
        blw := &SimplebodyWriter{body: buffer, ResponseWriter: c.Writer}
        c.Writer = blw
        // wrap the request context with a timeout
        ctx, cancel := context.WithTimeout(c.Request.Context(), t)
        c.Request = c.Request.WithContext(ctx)
        finish := make(chan struct{})
        // child goroutine
        go func() {
            c.Next()
            finish <- struct{}{}
        }()
        select {
        case <-ctx.Done():
            c.Writer.WriteHeader(http.StatusGatewayTimeout)
            c.Abort()
            // a timeout occurred: notify the child goroutine to exit
            cancel()
            // on timeout the buffer cannot be released proactively, since the
            // child goroutine may still write to it; it is left to the GC
        case <-finish:
            // the result is only written out in the main goroutine
            blw.ResponseWriter.Write(buffer.Bytes())
            buffpool.PutBuff(buffer)
        }
    }
}

func short(c *gin.Context) {
    time.Sleep(1 * time.Second)
    c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

func long(c *gin.Context) {
    // RPC call: set up a connection to the gRPC server
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    greeter := pb.NewGreeterClient(conn)
    name := defaultName
    // use the request context, which carries the middleware's deadline
    ctx := c.Request.Context()
    r, err := greeter.SayHello(ctx, &pb.HelloRequest{Name: name})
    if err != nil {
        log.Printf("could not greet: %v\n", err)
        return
    }
    log.Printf("Greeting: %s", r.Message)
    c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

func main() {
    // create a new gin engine without any middleware
    engine := gin.New()
    // add the timeout middleware with a 1-second limit
    engine.Use(Timeout(time.Second * 1))
    // a handler that takes 1 second
    engine.GET("/short", short)
    // a route that takes about 2 seconds (the gRPC server sleeps for 2 seconds)
    engine.GET("/long", long)
    // run the server
    log.Fatal(engine.Run(":8080"))
}
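To convince yourself that child goroutines no longer leak, one simple check (not part of the original code; purely illustrative) is to expose the goroutine count and watch that it stays flat while you fire timed-out requests at /long:

    // Hypothetical debug route; requires importing "runtime".
    // The goroutine count should return to its baseline shortly after each
    // /long request times out, instead of growing without bound.
    engine.GET("/stats", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"goroutines": runtime.NumGoroutine()})
    })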
A quick test with curl:
╰─$ curl -v http://localhost:8080/long
* Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
> GET /long HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.46.0
> Accept: */*
>
< HTTP/1.1 504 Gateway Timeout
< Date: Thu, 16 May 2019 02:48:03 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
At this point you can see a timeout error in the gin app's log, which confirms that the child goroutine really did exit:
[GIN-debug] GET /short --> main.short (2 handlers)
[GIN-debug] GET /long --> main.long (2 handlers)
[GIN-debug] Listening and serving HTTP on :8080
2019/05/20 23:16:45 could not greet: rpc error: code = DeadlineExceeded desc = context deadline exceeded
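The "could not greet" line is logged by the long handler itself: once the middleware calls cancel(), greeter.SayHello returns a DeadlineExceeded error instead of waiting the full 2 seconds, so the handler returns and the child goroutine exits.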
3. Postscript
Important: this program still has a bug. Please read "Implementing a timeout middleware for gin (continued 2)".
Recall this branch of the middleware:

    case <-finish:
        // the result is only written out in the main goroutine
        blw.ResponseWriter.Write(buffer.Bytes())
        buffpool.PutBuff(buffer)
Why is the buffpool.PutBuff step needed?
Because with sync.Pool we can reuse bytes.Buffer objects, cutting down on both allocation and GC overhead.
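The buffpool package is presumably a thin wrapper around sync.Pool; an illustrative sketch of such a wrapper (not the actual vearne/golib/buffpool source) might look like this:

    package buffpool

    import (
        "bytes"
        "sync"
    )

    var pool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    // GetBuff returns an empty buffer, reusing a pooled one when available.
    func GetBuff() *bytes.Buffer {
        buf := pool.Get().(*bytes.Buffer)
        buf.Reset() // clear any leftover content from the previous user
        return buf
    }

    // PutBuff returns the buffer to the pool for later reuse.
    func PutBuff(buf *bytes.Buffer) {
        pool.Put(buf)
    }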
Keeping the timeout logic inside the middleware like this still has a data-race problem: when the timeout fires, the main goroutine writes the 504 response and moves on, while the child goroutine may still be using the same gin.Context and writing into the buffer.
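Go's built-in race detector can surface this problem: run the server with go run -race main.go, issue a few requests to /long, and the detector should report the conflicting accesses.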
So take a look at my project vearne/gin-timeout instead. It is currently running well in production; if you hit any problems, feel free to open an issue.