A First Look at packetbeat
Copyright notice: original article by 萌叔, first published on this site.
When reposting, please credit 萌叔 | https://vearne.cc
1. Introduction
packetbeat is a network packet-capture, sniffing, and analysis tool developed by Elastic.
Like tcpdump, it relies on libpcap under the hood, but it is far more powerful than tcpdump or tcpcopy.
It can directly parse the following network protocols:
- ICMP (v4 and v6)
- DHCP (v4)
- DNS
- HTTP
- AMQP 0.9.1
- Cassandra
- MySQL
- PostgreSQL
- Redis
- Thrift-RPC
- MongoDB
- Memcache
- TLS
It converts captured packets into JSON documents and can export them to the following outputs:
- File
- Console
- Elasticsearch
- Logstash
- Kafka
- Redis
The processing flow, in brief:
event -> filter1 -> filter2 ... -> output
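The flow above can be sketched in a few lines of Python. This is an illustrative model only, not packetbeat's actual Go implementation; the filter names (`drop_healthchecks`, `add_tag`) are hypothetical:

```python
# Illustrative model of the event -> filters -> output pipeline.
# Each filter may transform the event or drop it by returning None.

def drop_healthchecks(event):
    # Hypothetical filter: discard load-balancer health checks.
    return None if event.get("path") == "/ping" else event

def add_tag(event):
    # Hypothetical filter: annotate the event before output.
    event["tags"] = event.get("tags", []) + ["captured"]
    return event

def run_pipeline(event, filters, output):
    for f in filters:
        event = f(event)
        if event is None:  # a filter dropped the event
            return
    output(event)

captured = []
run_pipeline({"path": "/api/time", "method": "GET"},
             [drop_healthchecks, add_tag], captured.append)
run_pipeline({"path": "/ping"},
             [drop_healthchecks, add_tag], captured.append)
# only the /api/time event reaches the output; /ping is dropped
```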
What amazed me is that it can decode binary wire protocols such as MySQL's and Redis's: from the captured records you can clearly see every SQL statement and every Redis command.
2. What can we do with it?
As far as I know, there have traditionally been a few ways to copy and replay production traffic:
(1) use tcpcopy
(2) build a traffic-mirroring module into the service itself
(3) have the service write logs in a special format, to be parsed and replayed later
Approach (1) yields a binary stream that humans cannot read; (2) and (3) are intrusive to the service, which is not ideal.
JSON-formatted data is friendly to programs and poses little reading barrier for humans; I think it is a good choice.
With this captured data, we can:
- troubleshoot production issues
- run functional and load tests against services
- statistically analyze requests (HTTP, MySQL, Redis, and so on) to provide the data needed for service optimization
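For the replay use case, a captured event can be turned back into a request. A minimal sketch, assuming events in the JSON shape shown later in this article; the target host and the helper function name are my own inventions:

```python
# Sketch: rebuild a replayable request from a packetbeat HTTP event.
# Field names (path, method, http.request.params/headers) follow the
# sample event shown later; the replay target is an assumption.

import json

def event_to_request(event, target="http://127.0.0.1:8080"):
    """Return (method, url, headers) for replaying a captured request."""
    req = event.get("http", {}).get("request", {})
    params = req.get("params", "")
    url = f"{target}{event['path']}" + (f"?{params}" if params else "")
    return event["method"], url, req.get("headers", {})

raw = ('{"method": "GET", "path": "/api/time",'
       ' "http": {"request": {"params": "t=1664550700",'
       ' "headers": {"user-agent": "curl"}}}}')
method, url, headers = event_to_request(json.loads(raw))
```

From here, any HTTP client can re-issue `method url headers` against a test environment.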
3. Installation, Configuration & Usage
3.1 Installation
See reference 4.
3.2 Configuration
The following configuration captures requests to a production HTTP service and forwards them to Kafka.
For the full set of options, see reference 2.
packetbeat.interfaces.device: any   # capture traffic on all interfaces
packetbeat.protocols:
- type: http                        # decode HTTP
  ports: [8080]                     # ports of the service being captured
  send_headers: ["User-Agent", "Authorization"]   # capture only these header fields
  real_ip_header: "X-Forwarded-For"
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["192.168.100.201:9092"]
  version: "0.10.2.1"               # must match your Kafka version
  # message topic selection + partitioning
  topic: 'downgrade'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
3.3 Usage
See the official documentation.
3.4 Caveats
1) The packetbeat configuration file's permissions must be set to
0644 (-rw-r--r--); otherwise you may see an error like:
Exiting: error loading config file: config file ("{beatname}.yml") can only be
writable by the owner but the permissions are "-rw-rw-r--" (to fix the
permissions use: 'chmod go-w /etc/{beatname}/{beatname}.yml')
2) If your output is Kafka, be sure to specify the Kafka version; otherwise you may see errors like the following.
2018-12-16T23:27:18.995+0800 INFO kafka/log.go:53 producer/broker/0 starting up
2018-12-16T23:27:18.995+0800 INFO kafka/log.go:53 producer/broker/0 state change to [open] on downgrade/0
2018-12-16T23:27:18.995+0800 INFO kafka/log.go:53 producer/leader/downgrade/0 selected broker 0
2018-12-16T23:27:18.995+0800 INFO kafka/log.go:53 producer/leader/downgrade/0 state change to [flushing-3]
2018-12-16T23:27:18.995+0800 INFO kafka/log.go:53 producer/leader/downgrade/0 state change to [normal]
2018-12-16T23:27:18.996+0800 INFO kafka/log.go:53 Connected to broker at 192.168.100.201:9092 (registered as #0)
2018-12-16T23:27:18.996+0800 INFO kafka/log.go:53 producer/broker/0 state change to [closing] because EOF
2018-12-16T23:27:18.996+0800 INFO kafka/log.go:53 Closed connection to broker 192.168.100.201:9092
Finally, let's look at an HTTP request captured by packetbeat:
{
"@timestamp": "2018-12-15T23:19:05.979Z",
"@metadata": {
"beat": "packetbeat",
"type": "doc",
"version": "6.5.1",
"topic": "downgrade"
},
"server": "",
"client_ip": "192.168.0.3",
"responsetime": 4,
"method": "GET",
"query": "GET /api/time", # 请求
"client_server": "",
"real_ip": "123.125.115.110",
"bytes_out": 218,
"beat": {
"name": "0377cc41911aa0",
"hostname": "0377cc41911aa0",
"version": "6.5.1"
},
"status": "OK",
"path": "/api/time",
"proc": "",
"client_port": 48249,
"client_proc": "",
"port": 8080,
"ip": "192.168.0.13",
"type": "http",
"direction": "in",
"bytes_in": 479,
"host": {
"name": "0377cc419aa0"
},
"http": {
"request": {
"params": "t=1664550700", # 请求参数
"headers": {
"authorization": "8FCAF985B1E29F0D",
"content-length": 0,
"content-type": "application/json",
"user-agent": "libcurl-android-agent/1.0"
}
},
"response": {
"code": 200,
"phrase": "OK",
"headers": {
"content-length": 95,
"content-type": "application/json; charset=utf-8"
}
}
}
}
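Events like the one above lend themselves to the statistical-analysis use case from section 2, for example aggregating response times by path. A small sketch; the field names (`path`, `responsetime`) match the sample event, but the event data here is made up for illustration:

```python
# Aggregate captured packetbeat events by request path.
# The events below are fabricated examples in the same shape as
# the sample event shown above.

from collections import defaultdict
from statistics import mean

events = [
    {"path": "/api/time", "responsetime": 4,  "status": "OK"},
    {"path": "/api/time", "responsetime": 6,  "status": "OK"},
    {"path": "/api/user", "responsetime": 30, "status": "Error"},
]

by_path = defaultdict(list)
for ev in events:
    by_path[ev["path"]].append(ev["responsetime"])

# request count and average response time (ms) per path
summary = {path: {"count": len(ts), "avg_ms": mean(ts)}
           for path, ts in by_path.items()}
```

In practice the events would be read from the Kafka topic (`downgrade` in the configuration above) rather than a hard-coded list.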
Note: packetbeat can also capture the payload of HTTP requests; see reference 2 for details.