Using Zabbix for Automated Server Inspection and Report Export


He then took a gourd, set it on the ground, and covered its mouth with a coin. Slowly he ladled oil through the hole in the coin without wetting the coin, and said: "I have no secret either; only that my hand is practiced."
— "The Oil Peddler" (《卖油翁》)

Implementation approach

Everything is driven through the Zabbix API, which is used to fetch and process the monitoring data. The overall flow is:

(figure: implementation workflow diagram)

  1. Zabbix exposes a rich API through which you can fetch host information, item IDs, and each item's trend and history data.

  2. First, fetch all hosts in a host group by group ID, including host names and IP addresses.

  3. Loop over the host IDs in the group, and inside that loop issue a nested request that resolves each item key to its item ID.

  4. Use the resolved item IDs to fetch the history data and trend data.

  5. Write the history and trend values for each host into a dict, and append each host's dict to a list.

  6. Write the list out to a spreadsheet (the code below writes CSV), and run the script from a scheduled (cron) job.
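Every request below carries an `auth` token. The script obtains it via the API's `user.login` method; a minimal sketch (the URL and credentials are placeholders, not the author's real values):

```python
import json
import requests  # third-party; the same dependency the script already uses

ApiUrl = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder URL
header = {"Content-Type": "application/json-rpc"}

def build_login_payload(user, password):
    """JSON-RPC request body for user.login (Zabbix 4.0 parameter names)."""
    return {
        "jsonrpc": "2.0",
        "method": "user.login",
        "params": {"user": user, "password": password},
        "id": 1,
    }

def get_auth(user, password):
    """Log in and return the auth token string used by every later call."""
    r = requests.post(url=ApiUrl, headers=header,
                      json=build_login_payload(user, password))
    return json.loads(r.content)["result"]
```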

Define the time window to fetch

import datetime
import time

# Window: from 120 minutes ago up to now
x = (datetime.datetime.now() - datetime.timedelta(minutes=120)).strftime("%Y-%m-%d %H:%M:%S")
y = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

def timestamp(x, y):
    # Convert "YYYY-mm-dd HH:MM:SS" strings to epoch-second strings,
    # which is what the history.get / trend.get time parameters expect
    p = time.strptime(x, "%Y-%m-%d %H:%M:%S")
    starttime = str(int(time.mktime(p)))
    q = time.strptime(y, "%Y-%m-%d %H:%M:%S")
    endtime = str(int(time.mktime(q)))
    return starttime, endtime
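A quick self-contained check of the conversion: for a fixed two-hour window, the difference between the returned epoch strings should be exactly 7200 seconds.

```python
import time

def timestamp(x, y):
    # Same conversion as above: local "YYYY-mm-dd HH:MM:SS" -> epoch-second strings
    p = time.strptime(x, "%Y-%m-%d %H:%M:%S")
    q = time.strptime(y, "%Y-%m-%d %H:%M:%S")
    return str(int(time.mktime(p))), str(int(time.mktime(q)))

start, end = timestamp("2024-01-10 00:00:00", "2024-01-10 02:00:00")
print(int(end) - int(start))  # 7200, i.e. the 120-minute window in seconds
```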

Fetch host information by host group ID

def get_hosts(groupids, auth):
    data = {
        "jsonrpc": "2.0",
        "method": "host.get",
        "params": {
            "output": ["name"],
            "groupids": groupids,
            "filter": {
                "status": "0"  # only monitored (enabled) hosts
            },
            "selectInterfaces": ["ip"],
        },
        "auth": auth,  # the auth token returned by user.login (a string)
        "id": 1
    }
    gethost = requests.post(url=ApiUrl, headers=header, json=data)
    return json.loads(gethost.content)["result"]
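With `selectInterfaces`, each returned host carries its name plus an `interfaces` list. A sketch of walking that structure (the payload below is illustrative sample data, not a real API response):

```python
# Illustrative host.get result shape (hostid, name, and ip are made up)
result = [
    {
        "hostid": "10105",
        "name": "web-01",
        "interfaces": [{"ip": "192.168.1.10"}],
    },
]

# This is the same access pattern the main loop below relies on
rows = [(h["hostid"], h["name"], h["interfaces"][0]["ip"]) for h in result]
print(rows)
```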

Loop over the hosts returned above and fetch each host's monitoring item data

Fetch history data

def get_data(hosts, auth):  # the loops below live inside one function
    host = []
    for i in hosts:
        item1 = []
        item2 = []
        dic1 = {}
        # Totals (disk size, memory size, CPU count) come from history data
        for j in ['vfs.fs.size[C:,total]', 'vm.memory.size[total]', 'system.cpu.num']:
            data = {
                "jsonrpc": "2.0",
                "method": "item.get",
                "params": {
                    "output": ["itemid"],
                    "search": {"key_": j},
                    "hostids": i['hostid']
                },
                "auth": auth,
                "id": 1
            }
            getitem = requests.post(url=ApiUrl, headers=header, json=data)
            item = json.loads(getitem.content)['result']

            hisdata = {
                "jsonrpc": "2.0",
                "method": "history.get",
                "params": {
                    "output": "extend",
                    "time_from": timestamp(x, y)[0],
                    # "time_till": timestamp(x, y)[1],
                    "history": 0,
                    "sortfield": "clock",
                    "sortorder": "DESC",
                    "itemids": item[0]['itemid'],
                    "limit": 1
                },
                "auth": auth,
                "id": 1
            }
            gethist = requests.post(url=ApiUrl, headers=header, json=hisdata)
            hist = json.loads(gethist.content)['result']
            item1.append(hist)

Fetch trend data

        # Peaks and averages (disk used, memory used, CPU load) come from trends
        for j in ['vfs.fs.size[C:,used]', 'vm.memory.size[used]', 'system.cpu.load']:
            data = {
                "jsonrpc": "2.0",
                "method": "item.get",
                "params": {
                    "output": ["itemid"],
                    "search": {"key_": j},
                    "hostids": i['hostid']
                },
                "auth": auth,
                "id": 1
            }
            getitem = requests.post(url=ApiUrl, headers=header, json=data)
            item = json.loads(getitem.content)['result']

            trendata = {
                "jsonrpc": "2.0",
                "method": "trend.get",
                "params": {
                    "output": ["itemid", "value_max", "value_avg"],
                    "time_from": timestamp(x, y)[0],
                    "time_till": timestamp(x, y)[1],
                    "itemids": item[0]['itemid'],
                    "limit": 1
                },
                "auth": auth,
                "id": 1
            }
            gettrend = requests.post(url=ApiUrl, headers=header, json=trendata)
            trend = json.loads(gettrend.content)['result']
            item2.append(trend)

Process the fetched data and export it to a CSV file

        # Raw values are in bytes, so dividing by 1024**3 yields GB
        dic1['Hostname'] = i['name']
        dic1['IP'] = i['interfaces'][0]['ip']
        dic1['Disk C: total (GB)'] = round(float(item1[0][0]['value']) / 1024**3, 2)
        dic1['Disk C: max used (GB)'] = round(float(item2[0][0]['value_max']) / 1024**3, 2)
        dic1['Memory total (GB)'] = round(float(item1[1][0]['value']) / 1024**3, 2)
        dic1['Memory max used (GB)'] = round(float(item2[1][0]['value_max']) / 1024**3, 2)
        dic1['Memory avg used (GB)'] = round(float(item2[1][0]['value_avg']) / 1024**3, 2)
        dic1['CPU load max'] = item2[2][0]['value_max']
        dic1['CPU load avg'] = item2[2][0]['value_avg']
        dic1['CPU cores'] = item1[2][0]['value']
        # Convert the sample's epoch clock to a readable timestamp
        t = time.localtime(int(item1[2][0]['clock']))
        dic1['clock'] = time.strftime("%Y-%m-%d %H:%M:%S", t)
        host.append(dic1)
    return host
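The byte-to-GB conversion used for every disk and memory column above can be checked in isolation:

```python
def bytes_to_gb(value):
    """Zabbix returns raw byte counts as strings; convert to GB, 2 decimals.

    Mirrors the round(float(...) / 1024**3, 2) expression in the report code.
    """
    return round(float(value) / 1024**3, 2)

print(bytes_to_gb("137438953472"))  # 128.0 -> a 128 GB total
print(bytes_to_gb("1610612736"))   # 1.5
```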
def writecsv(rows):
    # utf-8-sig writes a BOM so Excel detects the encoding correctly;
    # newline='' prevents blank lines between rows on Windows
    with open('data.csv', 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.DictWriter(f, csvheader)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)
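`csvheader` must list every dict key in column order. A self-contained sketch of the same pattern with an illustrative subset of the report's columns and one made-up row:

```python
import csv

# Illustrative subset of the report columns (not the full csvheader)
csvheader = ["Hostname", "IP", "CPU cores"]

def writecsv(rows, path="demo.csv"):
    # utf-8-sig adds a BOM for Excel; newline="" avoids blank lines on Windows
    with open(path, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.DictWriter(f, csvheader)
        writer.writeheader()
        writer.writerows(rows)

writecsv([{"Hostname": "web-01", "IP": "192.168.1.10", "CPU cores": "4"}])
```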

The resulting report looks like this:

(figure: exported inspection report screenshot)

The full script is available on GitHub:

https://github.com/sunsharing-note/zabbix/blob/master/xunjian_auto.py

Zabbix API reference:

https://www.zabbix.com/documentation/4.0/zh/manual/api/reference/history/get


Questions and discussion are welcome.

