CDH 5.13, Hive, Bad health: the Hive Metastore canary failed to create a database.
After installing Hive in CDH, the service may look healthy at first, but after a short while the error in the title appears. The details are as follows.
The log reports the following error.
File: hadoop-cmf-hive-HIVEMETASTORE-cdh1.log.out
[root@cdh1 hive]# pwd
/var/log/hive
[root@cdh1 hive]# ls -ltr
total 8912
drwxr-xr-x 2 hive hive 4096 Aug 30 15:06 stacks
drwx------ 2 hive hive 4096 Aug 30 15:07 audit
drwx------ 2 hive hive 4096 Aug 30 16:28 lineage
drwxr-xr-x 2 hive hive 4096 Aug 30 16:28 operation_logs
-rw-r--r-- 1 hive hive 177204 Aug 30 16:28 hadoop-cmf-hive-HIVESERVER2-cdh1.log.out
drwxr-xr-x 2 hive hive 4096 Aug 30 17:29 metrics-hiveserver2
drwxr-xr-x 2 hive hive 4096 Aug 30 17:29 metrics-hivemetastore
-rw-r--r-- 1 hive hive 8914029 Aug 30 17:29 hadoop-cmf-hive-HIVEMETASTORE-cdh1.log.out
[root@cdh1 hive]#
[root@cdh1 hive]# tail -f hadoop-cmf-hive-HIVEMETASTORE-cdh1.log.out
(... more output omitted ...)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
	at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
	at com.mysql.jdbc.Util.getInstance(Util.java:386)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
	at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
	at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
	at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
	at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
	at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
	at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
	at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
	at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
	at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
	at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
	at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:235)
	at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
	at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
	at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
	at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
	at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
	at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)
	at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913)
	at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)
	at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:570) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.createDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:908) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:947) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:33,414 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-4]: 4: source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-30 17:14:33,415 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-4]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-30 17:14:33,417 ERROR org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-4]: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at 
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:590) at org.apache.hadoop.hive.metastore.ObjectStore.getJDODatabase(ObjectStore.java:646) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:636) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2629) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052) at 
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619) at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503) at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173) at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378) at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328) at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396) at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621) at org.datanucleus.store.query.Query.executeQuery(Query.java:1786) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:590) at org.apache.hadoop.hive.metastore.ObjectStore.getJDODatabase(ObjectStore.java:646) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:636) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2629) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:33,418 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-4]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, returning NoSuchObjectException MetaException(message:You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2638) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:33,425 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-4-thread-4]: Retrying HMSHandler after 2000 ms (attempt 10 of 10) with error: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:570) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.createDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:908) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:947) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at 
com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619) at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503) at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173) at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378) at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328) at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396) at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:235) at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167) at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143) at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784) at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760) at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219) at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065) at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913) at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:570) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.createDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:908) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:947) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:35,427 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-4]: 4: source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-30 17:14:35,427 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-4]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-30 17:14:35,430 ERROR org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-4]: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:590) at org.apache.hadoop.hive.metastore.ObjectStore.getJDODatabase(ObjectStore.java:646) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:636) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2629) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619) at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503) at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173) at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378) at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328) at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430) at 
org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396) at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621) at org.datanucleus.store.query.Query.executeQuery(Query.java:1786) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243) at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:590) at org.apache.hadoop.hive.metastore.ObjectStore.getJDODatabase(ObjectStore.java:646) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:636) at org.apache.hadoop.hive.metastore.ObjectStore$2.getJdoResult(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2629) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:35,431 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-4]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, returning NoSuchObjectException MetaException(message:You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1) at 
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2638) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabaseInternal(ObjectStore.java:628) at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:612) at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.getDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database_core(HiveMetaStore.java:997) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:931) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:35,436 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-4-thread-4]: HMSHandler Fatal error: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:732) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:570) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.createDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:908) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:947) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1 at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619) at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503) at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173) at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444) at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378) at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328) at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94) at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430) at 
org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396) at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:235) at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167) at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143) at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784) at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760) at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219) at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065) at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1913) at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:727) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:752) at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:570) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) at com.sun.proxy.$Proxy13.createDatabase(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:908) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:947) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy15.create_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8951) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:8935) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2018-08-30 17:14:35,437 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-4]: </PERFLOG method=create_database start=1535620455262 end=1535620475437 duration=20175 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=4 
retryCount=-1 error=true>
2018-08-30 17:14:35,540 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-4]: <PERFLOG method=shutdown from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2018-08-30 17:14:35,540 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-4]: 4: Cleaning up thread local RawStore...
2018-08-30 17:14:35,541 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-4]: ugi=hue ip=10.158.1.111 cmd=Cleaning up thread local RawStore...
2018-08-30 17:14:35,541 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-4]: 4: Done cleaning up thread local RawStore
2018-08-30 17:14:35,541 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-4]: ugi=hue ip=10.158.1.111 cmd=Done cleaning up thread local RawStore
2018-08-30 17:14:35,541 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-4]: </PERFLOG method=shutdown start=1535620475540 end=1535620475541 duration=1 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=4 retryCount=0 error=false>
(... more output omitted ...)
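The key line is the repeated MySQLSyntaxErrorException about 'OPTION SQL_SELECT_LIMIT=DEFAULT'. Old releases of MySQL Connector/J (roughly anything before 5.1.21) issue SET OPTION SQL_SELECT_LIMIT=DEFAULT during normal connection handling, and MySQL 5.6 and later removed the SET OPTION syntax, so every Metastore operation that touches the backing database fails. The cure is to put a recent Connector/J on the classpath. A quick way to confirm the mismatch is sketched below; the MySQL host, login, and jar paths are assumptions for this cluster:

# Check the MySQL server version; the error only shows up against 5.6 or newer.
mysql -h cdh1 -u root -p -e "SELECT VERSION();"

# Check which Connector/J jars (if any) Hive and Cloudera Manager can see.
ls -l /opt/cloudera/parcels/CDH/lib/hive/lib/ | grep mysql
ls -l /usr/share/cmf/lib/ | grep mysql

# If a jar is present, its exact version is recorded in the manifest.
unzip -p /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java-*.jar META-INF/MANIFEST.MF | grep -i version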
Deploy the MySQL Connector/J driver for Hive and for Cloudera Manager:
[root@cdh1 ~]# ls -ltr /opt/cloudera/parcels/CDH/lib/hive/lib/ | grep mysql
[root@cdh1 ~]#
[root@cdh1 ~]# cp /software/mysql-connector-java-5.1.40-bin.jar /opt/cloudera/parcels/CDH/lib/hive/lib/
[root@cdh1 ~]#
[root@cdh1 ~]# ls -ltr /opt/cloudera/parcels/CDH/lib/hive/lib/ | grep mysql
-rw-r--r-- 1 root root 990927 Aug 31 08:54 mysql-connector-java-5.1.40-bin.jar
[root@cdh1 ~]#
[root@cdh1 ~]# ls -ltr /usr/share/cmf/lib/ | grep mysql
[root@cdh1 ~]#
[root@cdh1 ~]# cp /software/mysql-connector-java-5.1.40-bin.jar /usr/share/cmf/lib/
[root@cdh1 ~]#
[root@cdh1 ~]# ls -ltr /usr/share/cmf/lib/ | grep mysql
-rw-r--r-- 1 root root 990927 Aug 31 08:56 mysql-connector-java-5.1.40-bin.jar
[root@cdh1 ~]#
[root@cdh1 ~]# chmod -R 777 /usr/share/cmf/lib/mysql-connector-java-5.1.40-bin.jar
[root@cdh1 ~]# chmod -R 777 /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java-5.1.40-bin.jar
[root@cdh1 ~]#
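Two side notes on the commands above. chmod -R 777 works, but read access is all the JVMs need, so 644 is enough. And on a multi-node cluster the same jar has to exist on every host that runs a Hive Metastore, HiveServer2, or the Cloudera Manager Server, not only on cdh1. A more restrained variant of the same step (same paths, 5.1.40 assumed; the /usr/share/java location is the one Cloudera's install guide uses):

# World-readable is sufficient for the services to load the driver.
chmod 644 /usr/share/cmf/lib/mysql-connector-java-5.1.40-bin.jar
chmod 644 /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java-5.1.40-bin.jar

# Optionally also expose it under the unversioned name Cloudera's documentation refers to.
mkdir -p /usr/share/java
ln -sf /usr/share/cmf/lib/mysql-connector-java-5.1.40-bin.jar /usr/share/java/mysql-connector-java.jar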
Then restart the Hive service from Cloudera Manager:
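In the CM UI this is the Hive service, then Actions, then Restart. The same action is also available through the Cloudera Manager REST API if you prefer the command line; the API version, cluster name, and admin credentials below are placeholders for this environment:

# Restart the Hive service via the CM API (CM listens on cdh1:7180 here).
curl -u admin:admin -X POST "http://cdh1:7180/api/v18/clusters/Cluster%201/services/hive/commands/restart"

After the restart, the canary test is re-run on its normal schedule, so it can take a few minutes before the Bad status clears.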
The Metastore log after the restart:
[root@cdh1 ~]# tail -f /var/log/hive/hadoop-cmf-hive-HIVEMETASTORE-cdh1.log.out 2018-08-30 21:09:35,780 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-48]: </PERFLOG method=create_database start=1535634555658 end=1535634575780 duration=20122 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=48 retryCount=-1 error=true> 2018-08-30 21:09:35,882 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-48]: <PERFLOG method=shutdown from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-30 21:09:35,882 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-48]: 48: Cleaning up thread local RawStore... 2018-08-30 21:09:35,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-48]: ugi=hue ip=10.158.1.111 cmd=Cleaning up thread local RawStore... 2018-08-30 21:09:35,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-48]: 48: Done cleaning up thread local RawStore 2018-08-30 21:09:35,883 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-48]: ugi=hue ip=10.158.1.111 cmd=Done cleaning up thread local RawStore 2018-08-30 21:09:35,883 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-48]: </PERFLOG method=shutdown start=1535634575882 end=1535634575883 duration=1 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=48 retryCount=0 error=false> 2018-08-30 21:13:35,289 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-49]: <PERFLOG method=set_ugi from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-30 21:13:35,289 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-49]: </PERFLOG method=set_ugi start=1535634815289 end=1535634815289 duration=0 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=49 retryCount=0 error=false> 2018-08-30 21:13:37,397 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [Thread-3]: Shutting down hive metastore. ==================== 2018-08-31 08:57:58,891 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Starting hive metastore on port 9083 2018-08-31 08:57:59,468 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2018-08-31 08:58:00,246 INFO org.apache.hadoop.hive.metastore.ObjectStore: [main]: ObjectStore, initialize called 2018-08-31 08:58:00,529 INFO DataNucleus.Persistence: [main]: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 2018-08-31 08:58:00,529 INFO DataNucleus.Persistence: [main]: Property datanucleus.cache.level2 unknown - will be ignored 2018-08-31 08:58:02,338 INFO org.apache.hadoop.hive.metastore.ObjectStore: [main]: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2018-08-31 08:58:04,341 INFO DataNucleus.Datastore: [main]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:04,342 INFO DataNucleus.Datastore: [main]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:04,583 INFO DataNucleus.Datastore: [main]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
2018-08-31 08:58:04,584 INFO DataNucleus.Datastore: [main]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:04,835 INFO DataNucleus.Query: [main]: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2018-08-31 08:58:04,839 INFO org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [main]: Using direct SQL, underlying DB is MYSQL 2018-08-31 08:58:04,841 INFO org.apache.hadoop.hive.metastore.ObjectStore: [main]: Initialized ObjectStore 2018-08-31 08:58:05,272 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Added admin role in metastore 2018-08-31 08:58:05,279 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Added public role in metastore 2018-08-31 08:58:05,378 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: No user is added in admin role, since config is empty 2018-08-31 08:58:05,379 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Begin calculating metadata count metrics. 2018-08-31 08:58:05,401 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Finished metadata count metrics: 2 databases, 1 tables, 2 partitions. 2018-08-31 08:58:05,653 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Starting DB backed MetaStore Server with SetUGI enabled 2018-08-31 08:58:05,657 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Started the new metaserver on port [9083]... 2018-08-31 08:58:05,657 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Options.minWorkerThreads = 200 2018-08-31 08:58:05,657 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: Options.maxWorkerThreads = 100000 2018-08-31 08:58:05,657 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [main]: TCP keepalive = true 2018-08-31 08:58:56,961 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=set_ugi from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:56,970 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=set_ugi start=1535677136960 end=1535677136970 duration=10 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:56,985 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:56,985 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:56,988 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:56,988 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2018-08-31 08:58:57,166 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-1]: ObjectStore, initialize called 2018-08-31 08:58:57,180 INFO DataNucleus.Query: [pool-4-thread-1]: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2018-08-31 08:58:57,182 INFO 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-4-thread-1]: Using direct SQL, underlying DB is MYSQL 2018-08-31 08:58:57,182 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-4-thread-1]: A Gauge with name [active_jdo_transactions] already exists. The old gauge will be overwritten, but this is not recommended 2018-08-31 08:58:57,183 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-1]: Initialized ObjectStore 2018-08-31 08:58:57,304 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_table start=1535677136985 end=1535677137304 duration=319 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:57,322 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=drop_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:57,323 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:57,324 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:57,350 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:57,350 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:57,982 ERROR org.apache.hadoop.hdfs.KeyProviderCache: [pool-4-thread-1]: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !! 2018-08-31 08:58:58,143 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,143 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,205 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,205 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,339 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,340 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,396 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
2018-08-31 08:58:58,396 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,526 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:58:58,655 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-1]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:58:58,655 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:58:58,659 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=drop_table_with_environment_context start=1535677137322 end=1535677138659 duration=1337 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:58,774 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:58,775 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,775 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,781 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_database start=1535677138774 end=1535677138781 duration=7 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:58,792 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_all_tables from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:58,793 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,795 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,806 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_all_tables start=1535677138792 end=1535677138806 duration=14 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:58,820 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=drop_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:58,821 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 drop_database: 
cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,821 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,828 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,828 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,830 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 08:58:58,830 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 08:58:58,832 INFO DataNucleus.Datastore: [pool-4-thread-1]: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 2018-08-31 08:58:58,890 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-1]: Dropping database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c along with all tables 2018-08-31 08:58:58,950 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,971 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-1]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c1535677138966 2018-08-31 08:58:58,971 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:58,973 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=drop_database start=1535677138820 end=1535677138973 duration=153 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,088 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=create_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,088 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-31 08:58:59,089 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-31 08:58:59,093 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-1]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, returning NoSuchObjectException 2018-08-31 08:58:59,103 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-1]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:59,347 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=create_database start=1535677139088 end=1535677139347 duration=259 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,451 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,451 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:59,452 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:58:59,458 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_database start=1535677139451 end=1535677139458 duration=7 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,469 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=create_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,470 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 create_table: Table(tableName:CM_TEST_TABLE, dbName:cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:s, type:string, comment:test string), FieldSchema(name:f, type:float, comment:test float), FieldSchema(name:a, type:array<map<string,struct<p1:int,p2:int>>>, comment:test complex type)], location:null, inputFormat:null, outputFormat:null, compressed:false, numBuckets:1, serdeInfo:SerDeInfo(name:CM_TEST_TABLE, serializationLib:null, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}), partitionKeys:[FieldSchema(name:p1, type:string, comment:partition-key-1), FieldSchema(name:p2, type:int, comment:partition-key-2)], parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:null) 2018-08-31 08:58:59,471 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_table: Table(tableName:CM_TEST_TABLE, 
dbName:cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:s, type:string, comment:test string), FieldSchema(name:f, type:float, comment:test float), FieldSchema(name:a, type:array<map<string,struct<p1:int,p2:int>>>, comment:test complex type)], location:null, inputFormat:null, outputFormat:null, compressed:false, numBuckets:1, serdeInfo:SerDeInfo(name:CM_TEST_TABLE, serializationLib:null, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}), partitionKeys:[FieldSchema(name:p1, type:string, comment:partition-key-1), FieldSchema(name:p2, type:int, comment:partition-key-2)], parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:null) 2018-08-31 08:58:59,501 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-1]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:58:59,601 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=create_table_with_environment_context start=1535677139469 end=1535677139601 duration=132 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,704 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,705 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,705 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,739 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_table start=1535677139704 end=1535677139739 duration=35 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,751 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=add_partition_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,752 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,753 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,785 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-1]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table/p1=p0/p2=420 2018-08-31 08:58:59,854 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=add_partition_with_environment_context start=1535677139751 end=1535677139854 duration=103 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:58:59,959 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=add_partition_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:58:59,959 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,959 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:58:59,987 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-1]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table/p1=p1/p2=421 2018-08-31 08:59:00,045 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=add_partition_with_environment_context start=1535677139959 end=1535677140045 duration=86 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:59:00,147 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:59:00,147 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:59:00,147 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:59:00,172 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_table start=1535677140147 end=1535677140172 duration=25 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:59:00,173 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=drop_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:59:00,173 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:59:00,173 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 08:59:00,194 ERROR org.apache.hadoop.hdfs.KeyProviderCache: [pool-4-thread-1]: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !! 
2018-08-31 08:59:00,297 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:59:00,310 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-1]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table1535677140306 2018-08-31 08:59:00,310 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 08:59:00,310 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=drop_table_with_environment_context start=1535677140173 end=1535677140310 duration=137 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:59:00,413 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:59:00,413 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,414 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,419 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=get_database start=1535677140413 end=1535677140419 duration=6 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:59:00,421 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=drop_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:59:00,422 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 drop_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,422 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,426 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,426 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,427 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 08:59:00,428 INFO 
org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 08:59:00,431 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-1]: Dropping database cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c along with all tables 2018-08-31 08:59:00,439 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,455 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-1]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c1535677140452 2018-08-31 08:59:00,455 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-1]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 08:59:00,456 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=drop_database start=1535677140421 end=1535677140456 duration=35 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 08:59:00,563 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: <PERFLOG method=shutdown from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 08:59:00,564 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: Cleaning up thread local RawStore... 2018-08-31 08:59:00,564 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=Cleaning up thread local RawStore... 
2018-08-31 08:59:00,565 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-1]: 1: Done cleaning up thread local RawStore 2018-08-31 08:59:00,567 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-1]: ugi=hue ip=10.158.1.111 cmd=Done cleaning up thread local RawStore 2018-08-31 08:59:00,568 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-1]: </PERFLOG method=shutdown start=1535677140563 end=1535677140568 duration=5 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=1 retryCount=0 error=false> 2018-08-31 09:03:56,919 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=set_ugi from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:56,920 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=set_ugi start=1535677436919 end=1535677436920 duration=1 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:56,921 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:56,921 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:56,922 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:56,922 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2018-08-31 09:03:57,098 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-2]: ObjectStore, initialize called 2018-08-31 09:03:57,110 INFO DataNucleus.Query: [pool-4-thread-2]: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2018-08-31 09:03:57,111 INFO org.apache.hadoop.hive.metastore.MetaStoreDirectSql: [pool-4-thread-2]: Using direct SQL, underlying DB is MYSQL 2018-08-31 09:03:57,111 WARN org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics: [pool-4-thread-2]: A Gauge with name [active_jdo_transactions] already exists. 
The old gauge will be overwritten, but this is not recommended 2018-08-31 09:03:57,112 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-2]: Initialized ObjectStore 2018-08-31 09:03:57,116 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1535677436921 end=1535677437116 duration=195 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=-1 error=true> 2018-08-31 09:03:57,118 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,118 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:57,119 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:57,122 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-2]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, returning NoSuchObjectException 2018-08-31 09:03:57,122 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1535677437118 end=1535677437122 duration=4 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=-1 error=true> 2018-08-31 09:03:57,123 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=create_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,123 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-31 09:03:57,124 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, parameters:null) 2018-08-31 09:03:57,127 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-2]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c, returning NoSuchObjectException 2018-08-31 09:03:57,139 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:57,171 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=create_database start=1535677437123 end=1535677437171 duration=48 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,273 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: 
<PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,274 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:57,274 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:57,280 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1535677437273 end=1535677437280 duration=7 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,281 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=create_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,281 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 create_table: Table(tableName:CM_TEST_TABLE, dbName:cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:s, type:string, comment:test string), FieldSchema(name:f, type:float, comment:test float), FieldSchema(name:a, type:array<map<string,struct<p1:int,p2:int>>>, comment:test complex type)], location:null, inputFormat:null, outputFormat:null, compressed:false, numBuckets:1, serdeInfo:SerDeInfo(name:CM_TEST_TABLE, serializationLib:null, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}), partitionKeys:[FieldSchema(name:p1, type:string, comment:partition-key-1), FieldSchema(name:p2, type:int, comment:partition-key-2)], parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:null) 2018-08-31 09:03:57,282 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 create_table: Table(tableName:CM_TEST_TABLE, dbName:cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c, owner:null, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:s, type:string, comment:test string), FieldSchema(name:f, type:float, comment:test float), FieldSchema(name:a, type:array<map<string,struct<p1:int,p2:int>>>, comment:test complex type)], location:null, inputFormat:null, outputFormat:null, compressed:false, numBuckets:1, serdeInfo:SerDeInfo(name:CM_TEST_TABLE, serializationLib:null, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}), partitionKeys:[FieldSchema(name:p1, type:string, comment:partition-key-1), FieldSchema(name:p2, type:int, comment:partition-key-2)], parameters:null, viewOriginalText:null, viewExpandedText:null, tableType:null) 2018-08-31 09:03:57,293 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 09:03:57,374 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=create_table_with_environment_context start=1535677437281 end=1535677437374 duration=93 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,477 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,477 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,477 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,508 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1535677437477 end=1535677437508 duration=31 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,510 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=add_partition_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,510 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,511 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,538 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table/p1=p0/p2=420 2018-08-31 09:03:57,620 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=add_partition_with_environment_context start=1535677437510 end=1535677437620 duration=110 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,723 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=add_partition_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,723 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,724 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 add_partition : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,753 INFO org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table/p1=p1/p2=421 2018-08-31 09:03:57,818 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=add_partition_with_environment_context start=1535677437723 end=1535677437818 duration=95 
from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,919 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,920 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,920 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,940 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1535677437919 end=1535677437940 duration=21 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:57,941 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=drop_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:57,942 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,942 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_table : db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c tbl=CM_TEST_TABLE 2018-08-31 09:03:57,962 ERROR org.apache.hadoop.hdfs.KeyProviderCache: [pool-4-thread-2]: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !! 
2018-08-31 09:03:58,061 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-2]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 09:03:58,085 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-2]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 09:03:58,086 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-2]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c/cm_test_table 2018-08-31 09:03:58,086 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=drop_table_with_environment_context start=1535677437941 end=1535677438086 duration=145 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:58,187 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:58,188 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,188 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,193 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1535677438187 end=1535677438193 duration=6 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:58,194 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=drop_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:58,194 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 drop_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,194 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 drop_database: cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,198 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,198 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_all_tables: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,200 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 09:03:58,200 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: 
[pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=source:10.158.1.111 get_functions: db=cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c pat=* 2018-08-31 09:03:58,205 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-4-thread-2]: Dropping database cloudera_manager_metastore_canary_test_db_hive_hivemetastore_5689189d9738e7977f568918445c1e1c along with all tables 2018-08-31 09:03:58,214 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-2]: deleting hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,230 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-4-thread-2]: Moved: 'hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c' to trash at: hdfs://cdh1:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c1535677438227 2018-08-31 09:03:58,231 INFO hive.metastore.hivemetastoressimpl: [pool-4-thread-2]: Moved to trash: hdfs://cdh1:8020/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_5689189d9738e7977f568918445c1e1c 2018-08-31 09:03:58,231 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=drop_database start=1535677438194 end=1535677438231 duration=37 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> 2018-08-31 09:03:58,333 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=shutdown from=org.apache.hadoop.hive.metastore.RetryingHMSHandler> 2018-08-31 09:03:58,334 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: Cleaning up thread local RawStore... 2018-08-31 09:03:58,334 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=Cleaning up thread local RawStore... 2018-08-31 09:03:58,334 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: Done cleaning up thread local RawStore 2018-08-31 09:03:58,335 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=hue ip=10.158.1.111 cmd=Done cleaning up thread local RawStore 2018-08-31 09:03:58,335 INFO org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=shutdown start=1535677438333 end=1535677438335 duration=2 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false> |
As you can see, at this point the canary operations complete and there are no more errors in the log.
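In these logs the PERFLOG lines mark a failed call with error=true, so grepping the recent portion of the same log for that marker is a quick way to confirm the fix; the command below is only an illustrative sketch against the log path used above, and no matches for the latest canary run means every metastore call completed cleanly:

[root@cdh1 ~]# tail -n 500 /var/log/hive/hadoop-cmf-hive-HIVEMETASTORE-cdh1.log.out | grep "error=true"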
————————————————————————
Done.
The earlier failures happened because I had always assumed that Hive's MySQL driver was configured under /var/lib/hive/, and that assumption is what led to the error above: in reality, when the error occurred there was either no driver deployed there at all, or the default bundled MySQL driver was being used. That driver is fairly old, so some of the SQL syntax it issues is no longer accepted by the MySQL server, which is exactly the syntax error reported in the log.
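As a quick recap of the deployment step, a minimal sketch follows; the connector filename and the target directories are assumptions for a typical parcel-based CDH install, so adjust them to your own environment before copying anything:

[root@cdh1 ~]# cp mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar
[root@cdh1 ~]# ln -sf /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java.jar

After that, restart the Hive Metastore Server (and HiveServer2) from CM and wait for the next canary run.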
That said, even if the MySQL driver is not deployed the way this article describes, CM will still use its default driver during the initial setup to connect to the configured MySQL service and create the 53 metastore tables, … The problems that appear later are simply due to that default driver being an older version; it is not that the setup is unusable.
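If you are unsure which driver jar the Metastore actually loaded, listing the Hive lib directory and the files the running process has open is a quick sanity check; the parcel path and the pgrep pattern below are again assumptions for a typical CDH layout:

[root@cdh1 ~]# ls -l /opt/cloudera/parcels/CDH/lib/hive/lib/ | grep -i mysql
[root@cdh1 ~]# lsof -p $(pgrep -f org.apache.hadoop.hive.metastore.HiveMetaStore | head -1) | grep -i mysql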
On a local VM running CM (5.13.0), I hit the same error message after configuring Sentry authorization, but following your solution did not fix it. I checked the hive-metastore log and found the following:
2019-11-08 15:57:08,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-5-thread-3]: 3: create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE, parameters:null)
2019-11-08 15:57:08,435 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-3]: ugi=hue/quickstart.cloudera@CLOUDERA ip=/192.168.56.104 cmd=create_database: Database(name:cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE, description:Cloudera Manager Metastore Canary Test Database, locationUri:/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE, parameters:null)
2019-11-08 15:57:08,437 WARN org.apache.hadoop.hive.metastore.ObjectStore: [pool-5-thread-3]: Failed to get database cloudera_manager_metastore_canary_test_db_hive_HIVEMETASTORE, returning NoSuchObjectException
2019-11-08 15:57:08,441 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-3]: MetaException(message:null)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2136)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database_core(HiveMetaStore.java:932)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:993)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy10.create_database(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:9570)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_database.getResult(ThriftHiveMetastore.java:9554)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:736)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:731)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:731)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Do you have any suggestions? Thanks.