JVM Parameter Tuning
Published: 2019-06-19



While going through the Zabbix alert logs, I noticed abnormal heap memory behavior in many services across our cluster.

The symptoms: at some point a machine would produce a flood of RPC call timeouts lasting roughly a minute; occasionally the restart script would even decide the service was unreachable, kill it, and bring it back up.

All of this coincided with very high heap usage, and the GC logs showed a full GC happening right at that moment, a full GC lasting as long as 56 seconds!

In the 24 hours before each full GC, heap usage climbed slowly and steadily, with no noticeable drops.

My first suspicion was a memory leak, or a garbage collection problem. So:

jcmd <pid> GC.class_histogram

The histogram showed nothing suspicious:

   1:        127986       16991200  [C
   2:         13598        9982736  [B
   3:        124786        2994864  java.lang.String
   4:         34740        2768024  [Ljava.lang.Object;
   5:         83259        2664288  java.util.HashMap$Node
   6:         60181        2407240  java.util.TreeMap$Entry
   7:          9677        2359504  [I
   8:         15881        1786888  java.lang.Class
   9:         13293        1169784  java.lang.reflect.Method
  10:         36411        1165152  java.util.concurrent.ConcurrentHashMap$Node

Besides, whenever a collection ran, around 900 MB of byte arrays were reclaimed outright. These cluster services basically accept HTTP requests and then call a large number of RPC interfaces; RPC calls serialize outgoing objects and deserialize incoming data, so a large population of byte objects is entirely expected.
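
(A quick way to confirm that those byte arrays are reclaimable garbage rather than a leak: force a collection with jcmd and diff the histograms. The PID and file names below are placeholders:)

   jcmd 12345 GC.class_histogram > before.txt
   jcmd 12345 GC.run                 # asks the JVM to run System.gc()
   jcmd 12345 GC.class_histogram > after.txt
   diff before.txt after.txt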

We switched one machine from CMS garbage collection over to G1, and kept a machine on the original CMS setup as the baseline (a plausible set of flags is sketched after the list):

  1. Machine A: G1
  2. Machine B: CMS
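
(The exact JVM flags aren't reproduced in this write-up; a plausible JDK 8 setup for the comparison, with a placeholder log path, looks like this:)

   # Machine A: switch to G1, keep detailed GC logging
   -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:/path/to/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps

   # Machine B: the original CMS baseline
   -XX:+UseConcMarkSweepGC -Xloggc:/path/to/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps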

After the switch, machine A ran beautifully: the slow, steady heap climb never came back. Should we immediately conclude that G1 is simply a stronger collector than CMS? That can't be right; more likely something in the CMS configuration was off. So I took a careful look at the G1 GC log.

A typical collection looked like this:

2018-12-20T20:59:19.983+0800: 92405.779: [GC pause (G1 Evacuation Pause) (young), 0.1240010 secs]
   [Parallel Time: 5.6 ms, GC Workers: 4]
      [GC Worker Start (ms): Min: 92405778.9, Avg: 92405779.0, Max: 92405779.0, Diff: 0.1]
      [Ext Root Scanning (ms): Min: 2.1, Avg: 2.2, Max: 2.5, Diff: 0.4, Sum: 8.9]
      [Update RS (ms): Min: 0.4, Avg: 0.5, Max: 0.6, Diff: 0.2, Sum: 2.1]
         [Processed Buffers: Min: 11, Avg: 22.2, Max: 45, Diff: 34, Sum: 89]
      [Scan RS (ms): Min: 0.3, Avg: 0.3, Max: 0.3, Diff: 0.0, Sum: 1.2]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [Object Copy (ms): Min: 2.2, Avg: 2.3, Max: 2.4, Diff: 0.2, Sum: 9.2]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
         [Termination Attempts: Min: 1, Avg: 4.8, Max: 7, Diff: 6, Sum: 19]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.0, Sum: 0.2]
      [GC Worker Total (ms): Min: 5.4, Avg: 5.4, Max: 5.5, Diff: 0.1, Sum: 21.7]
      [GC Worker End (ms): Min: 92405784.4, Avg: 92405784.4, Max: 92405784.4, Diff: 0.0]
   [Code Root Fixup: 0.2 ms]
   [Code Root Purge: 0.0 ms]
   [Clear CT: 0.4 ms]
   [Other: 117.9 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 116.1 ms]
      [Ref Enq: 0.1 ms]
      [Redirty Cards: 0.1 ms]
      [Humongous Register: 0.1 ms]
      [Humongous Reclaim: 0.0 ms]
      [Free CSet: 0.8 ms]
   [Eden: 2372.0M(2372.0M)->0.0B(2370.0M) Survivors: 84.0M->86.0M Heap: 2600.6M(4096.0M)->230.6M(4096.0M)]
 [Times: user=0.14 sys=0.00, real=0.12 secs]

I also checked the GC log on the CMS machine:

2018-12-21T10:47:50.823+0800: 1937384.645: [GC (Allocation Failure) 2018-12-21T10:47:50.823+0800: 1937384.645: [ParNew: 424290K->15765K(460096K), 0.0263855 secs] 1870054K->1427M(4143232K), 0.0266841 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]

Read the two carefully. When G1 runs a young GC, it has adaptively (by some internal heuristic) sized Eden at 2372 MB and collects the whole thing in one pass, with a small portion surviving into the survivor regions: the heap drops from 2600 MB straight down to 230 MB. The CMS young GC, by contrast, collects a young generation of roughly 424 MB down to about 15 MB, with the heap going only from about 1826 MB to 1427 MB.
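
(A note on reading G1's heap lines: in Eden: 2372.0M(2372.0M)->0.0B(2370.0M), the pattern is used(capacity) before the pause -> used(capacity) after it, so the second capacity figure is the Eden size G1 has chosen for the next cycle. That detail matters again further down.)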

Clearly something had to be wrong with machine B's young generation sizing. So:

jmap -heap <pid>
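
(Side note: jmap -heap was removed in JDK 9; on newer JDKs the rough equivalent is jhsdb jmap --heap --pid <pid>.)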

This shows the heap layout; I ran it on both machines A and B.

Machine A (the command was run at a different time than the logs above, so the numbers differ slightly; the difference is negligible):

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 4294967296 (4096.0MB)
   NewSize                  = 1572864000 (1500.0MB)
   MaxNewSize               = 1572864000 (1500.0MB)
   OldSize                  = 2722103296 (2596.0MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 268435456 (256.0MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 536870912 (512.0MB)
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 1415577600 (1350.0MB)
   used     = 1191482688 (1136.2864379882812MB)
   free     = 224094912 (213.71356201171875MB)
   84.16936577690973% used
Eden Space:
   capacity = 1258291200 (1200.0MB)
   used     = 1145255184 (1092.200454711914MB)
   free     = 113036016 (107.79954528808594MB)
   91.01670455932617% used
From Space:
   capacity = 157286400 (150.0MB)
   used     = 46227504 (44.08598327636719MB)
   free     = 111058896 (105.91401672363281MB)
   29.390655517578125% used
To Space:
   capacity = 157286400 (150.0MB)
   used     = 0 (0.0MB)
   free     = 157286400 (150.0MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 2722103296 (2596.0MB)
   used     = 60697216 (57.8853759765625MB)
   free     = 2661406080 (2538.1146240234375MB)
   2.229791062271283% used

Machine B (same caveat: captured at a different time, so small differences are negligible):

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 4294967296 (4096.0MB)
   NewSize                  = 348913664 (332.75MB)
   MaxNewSize               = 348913664 (332.75MB)
   OldSize                  = 3946053632 (3763.25MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 268435456 (256.0MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 536870912 (512.0MB)
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 314048512 (299.5MB)
   used     = 210272216 (200.53121185302734MB)
   free     = 103776296 (98.96878814697266MB)
   66.95532950017608% used
Eden Space:
   capacity = 279183360 (266.25MB)
   used     = 197697008 (188.53855895996094MB)
   free     = 81486352 (77.71144104003906MB)
   70.81260430421068% used
From Space:
   capacity = 34865152 (33.25MB)
   used     = 12575208 (11.992652893066406MB)
   free     = 22289944 (21.257347106933594MB)
   36.068129001703475% used
To Space:
   capacity = 34865152 (33.25MB)
   used     = 0 (0.0MB)
   free     = 34865152 (33.25MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 3946053632 (3763.25MB)
   used     = 1266078912 (1207.4269409179688MB)
   free     = 2679974720 (2555.8230590820312MB)
   32.084685867746465% used

Now it's obvious: machine A, running G1, had adaptively been given a very large young generation, while machine B's young generation got a measly 299 MB.

At this point the diagnosis is straightforward: during busy periods the services create far more objects than machine B's young generation can hold, so after round upon round of young GC those objects inevitably get pushed into the old generation. A minimal sketch of this failure mode follows below.
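
(To make the failure mode concrete, here is a minimal, hypothetical sketch, not the actual service code: short-lived buffers standing in for the RPC serialization traffic. Run it against a 4 GB heap with a deliberately small young generation, for example -Xmx4g -Xmn300m -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails, and watch the old generation fill with data that is already garbage: the in-flight payloads survive a young GC or two, overflow the small survivor spaces, and get promoted.)

   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical reproduction: allocate short-lived 1 MB payloads
   // faster than a small young generation can absorb them.
   public class PrematurePromotion {
       public static void main(String[] args) throws InterruptedException {
           List<byte[]> inFlight = new ArrayList<>();
           while (true) {
               // ~100 MB of "RPC payloads" in flight at once.
               for (int i = 0; i < 100; i++) {
                   inFlight.add(new byte[1024 * 1024]);
               }
               // The payloads live just long enough to survive a young GC,
               // overflow the survivor spaces, and get promoted...
               Thread.sleep(50);
               // ...and only then become garbage, stranded in the old gen.
               inFlight.clear();
           }
       }
   }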

The immediate fix, then, was to enlarge machine B's young generation with one extra JVM flag:

-Xmn1500m
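
(-Xmn sets NewSize and MaxNewSize to the same value in a single flag. The full command line isn't reproduced here; with the 4 GB heap seen in the jmap output above it would look something like -Xms4g -Xmx4g -Xmn1500m -XX:+UseConcMarkSweepGC ...)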

With the young generation set to 1500 MB, the GC log looked like this:

2018-12-21T10:32:12.667+0800: 45796.661: [GC (Allocation Failure) 2018-12-21T10:32:12.668+0800: 45796.661: [ParNew: 1286287K->60816K(1382400K), 0.0254485 secs] 1371434K->145987K(4040704K), 0.0257676 secs] [Times: user=0.13 sys=0.01, real=0.03 secs]

The young generation now drops from about 1286 MB to about 60 MB per collection, and everything returned to normal. Total heap usage became a steady line of peaks and troughs instead of an unbroken climb. (Though judging by these numbers, the young generation probably wants to be closer to 2 GB.)

Then a new question surfaced. Whether it's a full GC or a young GC, the single most time-consuming phase is:

Ref Proc: 116.1 ms

That's over 90% of an entire young GC pause (116.1 ms out of roughly 124 ms); the earlier full GCs were far worse, 50-plus seconds, most of it spent processing object references.

So we added this JVM flag:

-XX:+ParallelRefProcEnabled

The Oracle documentation explains it:

Reference Object Processing Takes Too Long. Information about the time taken for processing of Reference Objects is shown in the Ref Proc and Ref Enq phases. During the Ref Proc phase, G1 updates the referents of Reference Objects according to the requirements of their particular type. In Ref Enq, G1 enqueues Reference Objects into their respective reference queue if their referents were found dead. If these phases take too long, then consider enabling parallelization of these phases by using the option -XX:+ParallelRefProcEnabled.
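
(To verify the effect, JDK 8 can break the Ref Proc cost down by reference type (soft, weak, final, phantom, and JNI weak references) with -XX:+PrintReferenceGC, which adds per-type counts and timings to each GC entry; comparing those lines before and after enabling -XX:+ParallelRefProcEnabled shows whether the parallelization actually paid off.)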

One puzzle was still open: how does G1 know to give the young generation a bigger slice of memory?

To answer that, I dug up the G1 GC log from the moment machine A was first switched over, and found something interesting:

...
2018-12-19T19:19:20.289+0800: 6.084: [GC pause (G1 Evacuation Pause) (young), 0.0304869 secs]
   [Eden: 204.0M(204.0M)->0.0B(206.0M) Survivors: 0.0B->22.0M Heap: 204.0M(4096.0M)->21.7M(4096.0M)]
...
2018-12-19T19:19:22.919+0800: 8.715: [GC pause (G1 Evacuation Pause) (young), 0.0294211 secs]
   [Eden: 206.0M(206.0M)->0.0B(932.0M) Survivors: 22.0M->30.0M Heap: 227.7M(4096.0M)->30.0M(4096.0M)]
...
2018-12-19T19:19:30.293+0800: 16.089: [GC pause (G1 Evacuation Pause) (young), 0.0805601 secs]
   [Eden: 932.0M(932.0M)->0.0B(1036.0M) Survivors: 30.0M->78.0M Heap: 962.0M(4096.0M)->77.0M(4096.0M)]
...
2018-12-19T19:19:35.918+0800: 21.714: [GC pause (G1 Evacuation Pause) (young), 0.1179542 secs]
   [Eden: 1036.0M(1036.0M)->0.0B(2366.0M) Survivors: 78.0M->90.0M Heap: 1113.0M(4096.0M)->89.0M(4096.0M)]
...
2018-12-19T19:19:54.483+0800: 40.279: [GC pause (G1 Evacuation Pause) (young), 0.2070769 secs]
   [Eden: 2366.0M(2366.0M)->0.0B(2320.0M) Survivors: 90.0M->136.0M Heap: 2455.0M(4096.0M)->136.0M(4096.0M)]
...
2018-12-19T19:20:12.459+0800: 58.254: [GC pause (G1 Evacuation Pause) (young), 0.2024750 secs]
   [Eden: 2320.0M(2320.0M)->0.0B(2320.0M) Survivors: 136.0M->136.0M Heap: 2456.0M(4096.0M)->135.0M(4096.0M)]
...
2018-12-19T19:20:30.707+0800: 76.503: [GC pause (G1 Evacuation Pause) (young), 0.1857774 secs]
   [Eden: 2320.0M(2320.0M)->0.0B(2322.0M) Survivors: 136.0M->134.0M Heap: 2455.0M(4096.0M)->134.0M(4096.0M)]
...
2018-12-19T19:20:49.051+0800: 94.846: [GC pause (G1 Evacuation Pause) (young), 0.2006349 secs]
   [Eden: 2322.0M(2322.0M)->0.0B(2322.0M) Survivors: 134.0M->134.0M Heap: 2456.0M(4096.0M)->133.0M(4096.0M)]
...
2018-12-19T19:21:07.473+0800: 113.269: [GC pause (G1 Evacuation Pause) (young), 0.1931652 secs]
   [Eden: 2322.0M(2322.0M)->0.0B(2320.0M) Survivors: 134.0M->136.0M Heap: 2455.0M(4096.0M)->135.0M(4096.0M)]
...

This is an unbroken stretch of GC logs produced right after the switch to G1. Watch the adaptive young generation sizing: G1 starts Eden at 204 MB, discovers after the collection that this is nowhere near enough, and raises the target to 206 MB. 206 MB still isn't enough, so the target jumps to 932 MB, then 1036 MB, then 2366 MB... only at around 2322 MB does the size suffice, and G1 settles the young generation at that value.
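
(The mechanism behind this, as I understand it: after every collection G1 re-sizes Eden so that the predicted young-pause time stays within -XX:MaxGCPauseMillis, 200 ms by default, given the allocation and copying rates it has observed. A corollary is that explicitly setting -Xmn on a G1 JVM pins the young generation and disables this adaptive sizing, so with G1 that flag is usually best left alone.)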

2018-12-21 21:03, Friday. My colleagues have all gone home; finally some quiet time to sit down and write up the problem that ate the last two days.

Reposted from: https://my.oschina.net/110NotFound/blog/2991762
