[hsu@server1 ~]$ hbase shell
15/03/15 15:58:57 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6-cdh5.2.0, rUnknown, Sat Oct 11 15:15:15 PDT 2014
hbase(main):001:0> list
TABLE
aaa
test
2 row(s) in 3.2560 seconds
[hsu@server1 ~]$ hbase version
15/03/15 16:03:08 INFO util.VersionInfo: HBase 0.98.6-cdh5.2.0
[hsu@server1 ~]$ which hbase
/usr/bin/hbase
3. Current system software information
[hsu@server1 ~]$ java -version
java version "1.7.0_09-icedtea"
OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-x86_64)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)
[hsu@server1 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[hsu@server1 ~]$ uname -a
Linux server1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
I simply could not find the start-hbase.sh and stop-hbase.sh scripts in the Cloudera CDH version of HBase!
4.4.1 CDH version of HBase
Copying Phoenix into lib on the cdh-5.3.0-0.98.6-cdh5.2.0 cluster
Restarting the cluster kept failing. After deleting the /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-core-4.3.0.jar package on every node and restarting, the errors disappeared. Bizarre, right?

sudo rm -f /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-core-4.3.0.jar

This odd problem came from not reading the installation documentation carefully: I had copied phoenix-core-[version].jar into hbase/lib on all nodes, which caused the HBase region servers to keep failing to start on restart. The documentation says:

Add the phoenix-[version]-server.jar to the classpath of all HBase region server and master and remove any previous version. An easy way to do this is to copy it into the HBase lib directory (use phoenix-core-[version].jar for Phoenix 3.x)
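Following that note, a minimal sketch of the correct deployment on this CDH cluster (the parcel path comes from the transcript above; the exact server jar name depends on the Phoenix distribution you downloaded, assumed here to be phoenix-4.3.0-server.jar):

# Run on every HBase master and region server node
sudo rm -f /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-core-4.3.0.jar    # remove the wrongly copied jar
sudo cp phoenix-4.3.0-server.jar /opt/cloudera/parcels/CDH/lib/hbase/lib/    # install the server jar instead
# Then restart the HBase service (e.g. from Cloudera Manager) so the new classpath takes effect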
[root@server1 bin]# hbase shell
2015-03-15 19:23:35,387 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6.1-hadoop2, r96a1af660b33879f19a47e9113bf802ad59c7146, Sun Sep 14 21:27:25 PDT 2014
hbase(main):001:0> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-0.98.6.1/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
t1
test2
2 row(s) in 20.2820 seconds
[root@server1 bin]# ./sqlline.py server1:2181
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:server1:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:server1:2181
15/03/15 20:03:23 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://server1:9000/user/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
Connected to: Phoenix (version 4.3)
Driver: PhoenixEmbeddedDriver (version 4.3)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
70/70 (100%) Done
Done
sqlline version 1.1.8
0: jdbc:phoenix:server1:2181> !help
!all            Execute the specified SQL against all the current connections
!autocommit     Set autocommit mode on or off
!batch          Start or execute a batch of statements
!brief          Set verbose mode off
!call           Execute a callable statement
!close          Close the current connection to the database
!closeall       Close all current open connections
!columns        List all the columns for the specified table
!commit         Commit the current transaction (if autocommit is off)
!connect        Open a new connection to the database.
!dbinfo         Give metadata information about the database
!describe       Describe a table
!dropall        Drop all tables in the current database
!exportedkeys   List all the exported keys for the specified table
!go             Select the current connection
!help           Print a summary of command usage
!history        Display the command history
!importedkeys   List all the imported keys for the specified table
!indexes        List all the indexes for the specified table
!isolation      Set the transaction isolation for this connection
!list           List the current connections
!manual         Display the SQLLine manual
!metadata       Obtain metadata information
!nativesql      Show the native SQL for the specified statement
!outputformat   Set the output format for displaying results (table,vertical,csv,tsv,xmlattrs,xmlelements)
!primarykeys    List all the primary keys for the specified table
!procedures     List all the procedures
!properties     Connect to the database specified in the properties file(s)
!quit           Exits the program
!reconnect      Reconnect to the database
!record         Record all output to the specified file
!rehash         Fetch table and column names for command completion
!rollback       Roll back the current transaction (if autocommit is off)
!run            Run a script from the specified file
!save           Save the current variabes and aliases
!scan           Scan for installed JDBC drivers
!script         Start saving a script to a file
!set            Set a sqlline variable
0: jdbc:phoenix:server1:2181> !list
1 active connection:
 #0  open     jdbc:phoenix:server1:2181
0: jdbc:phoenix:server1:2181> select * from SYSTEM.CATALOG;
+------------------------------------------+------------------------------------------+------------------------------------------+------+
| TENANT_ID                                | TABLE_SCHEM                              | TABLE_NAME                               |      |
+------------------------------------------+------------------------------------------+------------------------------------------+------+
|                                          |                                          | TEST                                     |      |
|                                          |                                          | TEST                                     | MYCO |
|                                          |                                          | TEST                                     | MYKE |
|                                          | SYSTEM                                   | CATALOG                                  |      |
|                                          | SYSTEM                                   | CATALOG                                  | ARRA |
... (remaining rows omitted) ...
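For reference, a table like the TEST entries shown above can be created straight from this sqlline prompt; a minimal sketch (the table and column names simply mirror the Java example later in this post, and autocommit is on, as the connection banner shows):

0: jdbc:phoenix:server1:2181> create table test (mykey integer not null primary key, mycolumn varchar);
0: jdbc:phoenix:server1:2181> upsert into test values (1,'Hello');
0: jdbc:phoenix:server1:2181> upsert into test values (2,'World!');
0: jdbc:phoenix:server1:2181> select * from test;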
# Verify in the hbase shell that the test table is visible, along with the system tables generated by Phoenix
hbase(main):002:0> list
TABLE
SYSTEM.CATALOG
SYSTEM.SEQUENCE
SYSTEM.STATS
TEST
TEST01
5 row(s) in 2.3750 seconds
WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://server01:54310/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at FileCopyToHdfs.readFromHdfs(FileCopyToHdfs.java:65)
        at FileCopyToHdfs.main(FileCopyToHdfs.java:26)
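The "No FileSystem for scheme: hdfs" message usually means the hadoop-hdfs jar, which registers the hdfs:// filesystem implementation, is not on the classpath of the process printing it, so HBase's DynamicClassLoader cannot inspect hdfs://.../hbase/lib; the connection itself still works, which is why it is only a WARN. One common workaround is to put the HDFS client jar on the classpath, sketched below assuming the Hadoop 2.4.0 install path seen earlier (adjust the path and jar version to your environment):

# For the HBase/Phoenix client shell, e.g. in conf/hbase-env.sh (path assumed)
export HBASE_CLASSPATH=$HBASE_CLASSPATH:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar
# For a standalone Java client such as FileCopyToHdfs, add the same jar to that program's -cp instead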
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Created by Andrew on 2015/3/15.
 */
public class PhoenixConnectTest {
    public static void main(String[] args) throws SQLException {
        Statement stmt = null;
        ResultSet rset = null;
        // The Phoenix driver registers itself, so loading it explicitly is optional:
        // String driver = "org.apache.phoenix.jdbc.PhoenixDriver";

        // Connect through ZooKeeper; multiple quorum hosts may also be listed:
        Connection con = DriverManager.getConnection("jdbc:phoenix:server1:2181");
        // Connection con = DriverManager.getConnection("jdbc:phoenix:server1,server2,server3");
        stmt = con.createStatement();

        // Uncomment on the first run to create and populate the table:
        // stmt.executeUpdate("create table test (mykey integer not null primary key, mycolumn varchar)");
        // stmt.executeUpdate("upsert into test values (1,'Hello')");
        // stmt.executeUpdate("upsert into test values (2,'World!')");
        con.commit();

        PreparedStatement statement = con.prepareStatement("select * from test");
        rset = statement.executeQuery();
        while (rset.next()) {
            System.out.println(rset.getString("mycolumn"));
        }
        statement.close();
        con.close();
    }
}
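To compile and run this test against the cluster above, the Phoenix client jar has to be on the runtime classpath; a sketch, assuming Phoenix 4.3.0 and that the jar sits in the current directory (adjust the path to wherever you unpacked Phoenix):

javac PhoenixConnectTest.java
java -cp phoenix-4.3.0-client.jar:. PhoenixConnectTest

With the commented-out upsert statements executed once, this prints Hello and World!.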
[root@server3 bin]# hbase shell
2015-07-16 15:32:41,808 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.6.1-hadoop2, r96a1af660b33879f19a47e9113bf802ad59c7146, Sun Sep 14 21:27:25 PDT 2014
hbase(main):001:0> list
TABLE
ABC
SYSTEM.CATALOG
SYSTEM.SEQUENCE
SYSTEM.STATS
TEST
TEST01
6 row(s) in 2.6710 seconds
2016-06-26 00:16:09,713 ERROR [RS_OPEN_REGION-bigdata-server-1:16020-0] handler.OpenRegionHandler: Failed open of region=SYSTEM.CATALOG,,1466870656182.03e30d1b07989af24fcbcb1173fcfd91., starting to roll back the global memstore size.
java.io.IOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded
...........
2016-06-26 00:16:15,153 INFO [regionserver/server1/192.168.2.201:16020] regionserver.HRegionServer: regionserver/server1/192.168.2.201:16020 exiting
2016-06-26 00:16:15,153 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
...........
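This error means the region server cannot load the Phoenix coprocessor classes when it tries to open SYSTEM.CATALOG, which typically happens when the Phoenix server jar is missing from hbase/lib on that node, or different nodes carry mismatched versions; the region server then aborts as shown. A quick check, assuming the standalone install path used earlier (adjust paths and versions to your layout):

# Run on every HBase master and region server node
ls -l /usr/local/hbase-0.98.6.1/lib/phoenix-*-server.jar
# If the jar is missing on a node, or versions differ across nodes, copy the same server jar everywhere and restart HBase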