
Hadoop commands

Log in via ssh with the appropriate account and enter the Hadoop home directory.

Hadoop install directory: /usr/lib/hadoop/


· Execute sh bin/start-all.sh to start all daemons.

or

· Execute sh bin/stop-all.sh to stop all daemons.

1. View the contents of a specified directory

hadoop dfs -ls [directory]

e.g.: hadoop dfs -ls /user/wangkai.pt

2. View the contents of an existing file

hadoop dfs -cat [file_path]

e.g.: hadoop dfs -cat /user/wangkai.pt/data.txt

3. Upload a local file to Hadoop

hadoop fs -put [local path] [HDFS directory]

hadoop fs -put /home/t/file.txt /user/t

(file.txt is the file name)

4. Upload a local folder to Hadoop

hadoop fs -put [local directory] [HDFS directory]

hadoop fs -put /home/t/dir_name /user/t

(dir_name is the folder name)

5. Download a file from Hadoop to a local directory

hadoop fs -get [HDFS file path] [local directory]

hadoop fs -get /user/t/ok.txt /home/t

6. Delete a specified file on Hadoop

hadoop fs -rm [file path]

hadoop fs -rm /user/t/ok.txt

7. Delete a specified folder on Hadoop (including its subdirectories, etc.)

hadoop fs -rmr [directory path]

hadoop fs -rmr /user/t

8. Create a new directory under a Hadoop directory

hadoop fs -mkdir /user/t

9. Create a new empty file under a specified Hadoop directory

hadoop fs -touchz /user/new.txt

10. Rename a file on Hadoop

hadoop fs -mv /user/test.txt /user/ok.txt (renames test.txt to ok.txt)

11. Merge all files under a Hadoop directory into a single file and download it to the local filesystem

hadoop dfs -getmerge /user /home/t

12. Kill a running Hadoop job

hadoop job -kill [job-id]
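To find the job-id of a running job before killing it, the jobs can be listed first. A minimal sketch (the job id below is a placeholder, and a running cluster is assumed):

```shell
# list running jobs to obtain the job-id
hadoop job -list

# kill one of them; the id here is a placeholder, not a real job
hadoop job -kill job_201307091604_0001
```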

ZooKeeper commands

ZooKeeper service commands:

After the appropriate configuration is in place, the service can be operated directly through the zkServer.sh script:

  • 1. Start the ZK service: sh bin/zkServer.sh start
  • 2. Check the ZK service status: sh bin/zkServer.sh status
  • 3. Stop the ZK service: sh bin/zkServer.sh stop
  • 4. Restart the ZK service: sh bin/zkServer.sh restart

To access ZooKeeper and create or modify data, use zkCli.sh -server <ip>:2181 to connect to the ZooKeeper service; after a successful connection, the client prints the ZooKeeper environment and configuration information.

Some simple client operations are as follows:

  • 1. ls / — use the ls command to view the content currently stored in ZooKeeper under the root directory
  • 2. ls2 / — view the data of the current node and see its update counts
  • 3. create /zk "test" — create a new znode "/zk" and associate the string "test" with it
  • 4. get /zk — confirm that the znode contains the string we created
  • 5. set /zk "zkbak" — set the string associated with /zk
  • 6. delete /zk — delete the znode just created
  • 7. quit — exit the client
  • 8. help — show the help
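Putting these operations together, a typical zkCli.sh session looks like the following sketch (the server address is a placeholder, and a running ZooKeeper service is assumed):

```shell
# connect to the ZooKeeper service; <ip> is a placeholder
bin/zkCli.sh -server <ip>:2181

# then, inside the client:
#   create /zk "test"     creates the znode /zk with initial data "test"
#   get /zk               prints "test" plus node stat information
#   set /zk "zkbak"       replaces the data with "zkbak"
#   delete /zk            removes the znode again
#   quit                  leaves the client
```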

ZooKeeper common commands

ZooKeeper supports a set of specific four-letter command words. Most of them query the current state of the ZooKeeper service and related information. Users can submit the appropriate command through a client such as telnet or nc.

  • 1. echo stat | nc <ip> 2181 — see which role the node plays, follower or leader
  • 2. echo ruok | nc <ip> 2181 — test whether the server is started; a reply of imok indicates it is running.
  • 3. echo dump | nc <ip> 2181 — list unfinished sessions and ephemeral nodes.
  • 4. echo kill | nc <ip> 2181 — shut down the server.
  • 5. echo conf | nc <ip> 2181 — print details of the server configuration.
  • 6. echo cons | nc <ip> 2181 — list full connection/session details for all clients connected to this server.
  • 7. echo envi | nc <ip> 2181 — print details about the server environment (differs from the conf command).
  • 8. echo reqs | nc <ip> 2181 — list outstanding requests.
  • 9. echo wchs | nc <ip> 2181 — list brief details of watches on the server.
  • 10. echo wchc | nc <ip> 2181 — list watch details by session; the output is a list of sessions with their watches.
  • 11. echo wchp | nc <ip> 2181 — list watch details by path; the output is paths with one or more associated sessions.
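For instance, to see which node of an ensemble is the leader, the stat command can be run against each server in turn. A minimal sketch (the hostnames zk1-zk3 are placeholders, and a running ensemble is assumed):

```shell
# print each server's mode (leader or follower); hostnames are placeholders
for h in zk1 zk2 zk3; do
  printf '%s: ' "$h"
  echo stat | nc "$h" 2181 | grep Mode
done
```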

Common commands for the HBase shell

(1) Create a table scores with two column families, grade and course

hbase(main):001:0> create 'scores', 'grade', 'course'

You can use the list command to see which tables exist in the current HBase instance, and the describe command to view a table's structure. (Remember that table names, row keys, and column names all need to be quoted.)

(2) Insert values according to the table structure

put 'scores','Tom','grade:','5'
put 'scores','Tom','course:math','97'
put 'scores','Tom','course:art','87'
put 'scores','Jim','grade:','4'
put 'scores','Jim','course:','89'
put 'scores','Jim','course:','80'

Once the table structure is set up, insertion is quite free: columns can be added under a column family at will. If a column family has no child column, its name is still followed by a colon.
The put command has only this one form:
hbase> put 't1', 'r1', 'c1', 'value', ts1
Here t1 is the table name, r1 the row key, c1 the column name, and value the cell value; ts1 is a timestamp and is usually omitted.

(3) Query data by key

get 'scores','Jim'
get 'scores','Jim','grade'

You may notice a pattern: HBase shell operations roughly follow the order table name, row key, column name, followed by other conditions if there are any.
The get command is used as follows:

hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 't1', 'r1', {COLUMN => 'c1'}
hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']

(4) Scan all data
You can specify modifiers: TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH, or COLUMNS. With no modifiers, as in the first example below, all rows are shown.
The code is as follows:

hbase> scan '.META.'
hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter (123, 456))"}
hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}

(5) Delete specified data

The code is as follows:

delete 'scores','Jim','grade'
delete 'scores','Jim'

The delete command doesn't vary much either; there is only this one form:

hbase> delete 't1', 'r1', 'c1', ts1
A deleteall command can also be used to delete an entire row; use it with caution!
If you need to delete a whole table, use the truncate command. There is no direct whole-table delete command; truncate is in effect the combination of the three commands disable, drop, and create.
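As a sketch, truncating the scores table from this section is therefore equivalent to the manual three-command sequence (a running HBase instance is assumed):

```shell
hbase> truncate 'scores'

# equivalent to:
hbase> disable 'scores'
hbase> drop 'scores'
hbase> create 'scores', 'grade', 'course'
```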
(6) Modify table structure

The code is as follows:

disable 'scores'
alter 'scores',NAME=>'info'
enable 'scores' 

The alter command is used as follows (in some versions the table must be disabled first):

A, change or add a column family:

hbase> alter 't1', NAME => 'f1', VERSIONS => 5

B, delete a column family:

The code is as follows:

hbase> alter 't1', NAME => 'f1', METHOD => 'delete'
hbase> alter 't1', 'delete' => 'f1'

C, you can also modify table attributes such as MAX_FILESIZE:

hbase> alter 't1', METHOD => 'table_att', MAX_FILESIZE => '134217728'

D, you can add a table coprocessor:

hbase> alter 't1', METHOD => 'table_att', 'coprocessor' => 'hdfs:///foo.jar||1001|arg1=1,arg2=2'

Multiple coprocessors can be configured on one table, and a sequence number is assigned automatically. Loading a coprocessor (which can be thought of as a filter) must follow this rule:
[coprocessor jar file location] | class name | [priority] | [arguments]
E, remove a table attribute or coprocessor as follows:
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
F, you can execute multiple alter commands at once:

hbase> alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}

(7) Count rows:

The code is as follows:

hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000

Count is generally time-consuming; for large tables, counting is usually done with mapreduce instead. By default, count caches 10 rows (CACHE) and reports progress every 1000 rows (INTERVAL).

(8) disable and enable operations

Many operations require the table's availability to be suspended first, for example the alter operations above and deleting a table. The disable_all and enable_all commands can operate on multiple tables at once.
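A minimal sketch of operating on several tables at once (the pattern 't.*' is a regular expression over table names; disable_all asks for confirmation before acting):

```shell
hbase> disable_all 't.*'
hbase> enable_all 't.*'
```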

(9) Delete a table
First suspend the table's availability with disable, then execute the drop command:

disable 't1'
drop 't1'

Beyond the common commands above, all of HBase's shell commands can be listed with the help command, which divides them into several command groups.
HBase shell scripts

Since these are shell commands, you can also write a series of HBase shell commands into a file and execute them in order, just like a Linux shell script: write all the HBase shell commands in a file, then run the following command:

The code is as follows:

$ hbase shell test.hbaseshell

Easy to use.

Phoenix configuration

Steps to configure Phoenix (this example uses Phoenix 4.8.2 for HBase 1.1):
1. Download apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz
2. tar -zxvf apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz -C /usr/local/phoenix/
3. Configure environment variables: vi /etc/profile and add
export PHOENIX_HOME=/usr/local/phoenix/apache-phoenix-4.8.2-HBase-1.1-bin
export ...
export ...

Copyright © 2011 Dowemo. All rights reserved.