Error Troubleshooting Log (2019-12-06 onward)

1. Editing a file as root still warns: W10: Warning: Changing a readonly file

While editing a file in vim on Linux, even though I was logged in as root, pressing i produced the red warning W10: Warning: Changing a readonly file.

After being stuck on this for two days, the solution turned out to be simple:

make the edits anyway, then force-save and quit with :wq! (the trailing ! overrides the readonly flag).

2. pd.read_csv() turns Chinese text into mojibake

Loading a CSV file that contains Chinese text with pandas:

import pandas as pd

data = pd.read_csv(
    "C:\\Users\\Administrator\\Desktop\\work\\data\\user20191206.csv",
    encoding='utf-8', header=None, index_col=False,
    names=['user_id', 'registe_type', 'nickname', 'phone', 'sex',
           'behavior_labels', 'last_login_source', 'wechat_tags',
           'createtime', 'last_login_time', 'is_subject', 'black',
           'is_indemnify', 'ban_speaking', 'sale_false', 'is_new',
           'is_audit', 'is_evil', 'is_forbid', 'is_subscribe',
           'last_msg_id', 'user_type', 'black_room'])

print(data.head())

The first run, without an encoding argument, raised an error; passing encoding='utf-8' also failed. A search suggested switching to ISO-8859-1, which indeed stopped the exception, but all the Chinese text came out garbled.
The real problem was the file itself: it was not UTF-8 encoded. Opening the CSV in Notepad++, converting its encoding to UTF-8, and switching the code back to encoding='utf-8' made everything work.
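The guess-the-encoding step can also be scripted instead of done by hand. A minimal sketch, with an assumed candidate list (gb18030 covers GBK-encoded Chinese files; latin-1 never fails to decode, so it is only a last resort that hides, rather than fixes, the problem):

```python
import tempfile

def sniff_encoding(path, candidates=("utf-8", "gb18030", "latin-1")):
    """Return the first candidate encoding that decodes the whole file."""
    for enc in candidates:
        try:
            with open(path, encoding=enc) as fd:
                fd.read()
            return enc
        except UnicodeDecodeError:
            continue
    return None

# Demo: a GBK-encoded file is rejected by utf-8 but accepted by gb18030.
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    tmp.write("user_id,nickname\n1,測(cè)試\n".encode("gbk"))

print(sniff_encoding(tmp.name))  # gb18030
```

The detected encoding can then be passed straight to pd.read_csv(path, encoding=enc), though converting the file to UTF-8 once, as above, is the cleaner long-term fix.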

3. Reading a CSV fails: pandas.errors.ParserError: Error tokenizing data. C error: Expected 40 fields in line 1389, saw 41

Adding the sep='\t' argument solved it; the file was actually tab-delimited, so parsing it with the default comma separator let commas embedded in field values split some rows into extra columns:
line = pd.read_csv(file_path, sep='\t')
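Before changing the separator, it can help to confirm which rows actually have the unexpected field count. A small sketch using only the standard library (the sample data is made up):

```python
import csv
import io

# Made-up sample: the second data row has one field too many, which is
# what triggers pandas' "Expected N fields, saw M" ParserError.
sample = "a,b,c\n1,2,3\n4,5,6,7\n"

reader = csv.reader(io.StringIO(sample))
header = next(reader)
bad_rows = [(lineno, row)
            for lineno, row in enumerate(reader, start=2)
            if len(row) != len(header)]

print(bad_rows)  # [(3, ['4', '5', '6', '7'])]
```

Running the same scan over the real file with the suspected delimiter (csv.reader(f, delimiter='\t')) shows quickly whether a different separator makes the field counts consistent.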

4. Chinese text read from a CSV comes back as bytes

Cause: the file was opened in binary mode:
with open(infile, 'rb') as fd:
Fix:
1. Convert the CSV's encoding to UTF-8 in Notepad++.
2. Change 'rb' to 'r' so the file is read as text.
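The difference between the two modes can be seen directly; a minimal sketch:

```python
import tempfile

# Write a small UTF-8 file containing Chinese text.
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    tmp.write("中文,列\n".encode("utf-8"))

with open(tmp.name, "rb") as fd:
    raw = fd.read()   # bytes: Chinese shows up as \xe4\xb8\xad... escapes

with open(tmp.name, "r", encoding="utf-8") as fd:
    text = fd.read()  # str: the characters come back intact

print(type(raw).__name__, type(text).__name__)  # bytes str
```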

5. Chinese text written to a CSV comes out garbled

Cause: the file was opened with encoding='GBK'.
Fix: write UTF-8 instead:
ofile = open(out_path, 'w', encoding='utf-8')
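For CSV output specifically, the csv module is worth pairing with the encoding fix; passing newline='' is the csv-module convention and prevents doubled line endings on Windows. A small sketch with made-up rows:

```python
import csv
import tempfile

rows = [["user_id", "nickname"], ["1", "中文昵稱"]]

# newline="" is the csv-module convention; encoding="utf-8" keeps Chinese intact.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False,
                                 encoding="utf-8", newline="") as ofile:
    csv.writer(ofile).writerows(rows)
    out_path = ofile.name

with open(out_path, encoding="utf-8") as fd:
    print(fd.read())
```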

6. Redis exception: redis.clients.jedis.exceptions.JedisDataException: MISCONF Redis is configured to save RDB snapshots...

On CentOS, a Java client connecting through Jedis raised:
redis.clients.jedis.exceptions.JedisDataException: MISCONF Redis is configured to save RDB snapshots,
but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check
Redis logs for details about the error.
Fix:
connect to the server with redis-cli (in the Redis installation directory) and run:
config set stop-writes-on-bgsave-error no
which returns:
OK
Note that this only disables the safety check; the underlying reason the RDB save is failing (disk space, memory, permissions) is still worth tracking down in the Redis logs.

7. java.lang.UnsupportedOperationException when calling remove() while iterating a List

Original code:

List<String> req_itemids = Arrays.asList(items.split("_"));
for (int i = 1; i < req_itemids.size(); i++) {
    Float score = Float.parseFloat(req_itemids.get(i).split(":")[1]);
    guiyi_sum += Math.pow(score, 2);
    if (score < 0.5) {
        req_itemids.remove(i--);
    }
}

Cause:
Arrays.asList() does not return a java.util.ArrayList; it returns Arrays' private inner class, which is also named ArrayList. Both extend AbstractList, whose add and remove methods default to throwing UnsupportedOperationException without doing anything. java.util.ArrayList overrides these methods, but Arrays' inner class does not, so calling remove on it throws. The fix is to copy the elements into a real java.util.ArrayList first:

List<String> list = Arrays.asList(items.split("_"));
req_itemids = new ArrayList<>(list);
for (int i = 1; i < req_itemids.size(); i++) {
    Float score = Float.parseFloat(req_itemids.get(i).split(":")[1]);
    guiyi_sum += Math.pow(score, 2);
    if (score < 0.5) {
        req_itemids.remove(i--);
    }
}
8. Sorting a list of objects with a comparator fails: java.lang.IllegalArgumentException: Comparison method violates its general contract!

Cause:
the compareTo method of the Comparable objects being sorted only ever returned 1 or -1; it never returned 0 for equal values, which violates the comparison contract the sort algorithm relies on.
Original code:

@Override
public int compareTo(IS is) {
    Double cha = is.score - this.score;
    if (cha > 0) {
        return 1;
    } else {
        return -1;
    }
}

Corrected code:

@Override
public int compareTo(IS is) {
    Double cha = is.score - this.score;
    if (cha > 0) {
        return 1;
    } else if (cha == 0) {
        return 0;
    } else {
        return -1;
    }
}

An equivalent and simpler form is return Double.compare(is.score, this.score);.
9. Running the compiled C++ service binary with ./server fails

The binary failed to start with a shared-library load error (screenshot omitted).

Fix:
Temporary workaround, for the current shell only:
[root@localhost gen-cpp]# export LD_LIBRARY_PATH=/usr/local/lib
Permanent fix, edit /root/.bash_profile:

  1. Add export LD_LIBRARY_PATH=/usr/local/lib
  2. Run source /root/.bash_profile to apply it
10. Hosting a custom CGI demo with spawn-fcgi fails: spawn-fcgi: child exited with: 127

The CGI test program compiled successfully, but registering it with spawn-fcgi failed (screenshot omitted).

Fix:
Tracing back to the simplest case, running ./test directly failed with exactly the same shared-library error as the previous item, so the same fix applies:
export LD_LIBRARY_PATH=/usr/local/lib
After that the binary ran, and registering it with spawn-fcgi no longer errored.


11. Running a jar fails: classes cannot be loaded, 'invalid or corrupt jarfile', or dependency classes are missing

Fix: build the jar properly with Maven in IDEA.
1. Create a Maven project
In IDEA, go to File → New Project, choose Maven, and accept the defaults (screenshot omitted).

2. Configure jar packaging
Add the following to pom.xml:

<build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <useUniqueVersions>false</useUniqueVersions>
                            <classpathPrefix>lib/</classpathPrefix>
                            <mainClass>cn.mymaven.test.TestMain</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>

<mainClass> is optional; if you set it, use the fully qualified name of your main class.
3. Build the jar
Double-click package in IDEA's Maven window to produce the jar (screenshot omitted).

4. Run it from the jar's directory with java -jar XXX.jar, or with java -cp XXX.jar package.ClassName.
5. The configuration above still does not bundle third-party dependencies. To include them, add the maven-assembly-plugin to the pom:

<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <mainClass>cn.mymaven.test.Producer</mainClass>
            </manifest>
        </archive>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
</plugin>

After the pom is reloaded, the assembly plugin appears in IDEA's Maven window (screenshot omitted).
Double-click its packaging goal to build; the project's dependencies are now packaged into the jar, which can be run with the commands from step 4.

12. Running a Python script fails: AttributeError: 'module' object has no attribute 'SSLContext'

(Screenshot of the full traceback omitted.)

Fix:
pika had originally been installed with python -m pip install pika --upgrade. Installing an older release of this RabbitMQ client library with pip install pika==0.11.0 made the error go away. The likely cause is that the latest pika release is incompatible with Python 2.7, because it assumes ssl.SSLContext, which old Python 2.7 builds do not provide.
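A quick way to confirm this diagnosis on any interpreter is to probe for the attribute directly (ssl.SSLContext exists from Python 2.7.9 / 3.2 onward):

```python
import ssl
import sys

print(sys.version.split()[0])
# True on any Python >= 2.7.9 / 3.2; False on the old 2.7 builds
# that trigger the pika AttributeError above.
print(hasattr(ssl, "SSLContext"))
```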

13. Installing a Python library fails: ERROR: Package 'more-itertools' requires a different Python: 2.7.13 not in '>=3.5'

The last step of the installation failed because current more-itertools releases require Python 3.5 or newer (screenshot omitted).
Fix:
Take the error message at its word and install an older more-itertools release that still supports Python 2:
]# pip install more-itertools==5.0.0
That succeeded; re-running the original install command then completed normally.
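The check pip is performing here can be sketched in a few lines. The parsing below is deliberately reduced to the '>=major.minor' form seen in the error; real Requires-Python specifiers are richer than this:

```python
import sys

def satisfies(requires, version=None):
    """Tiny subset of Requires-Python handling: only '>=major.minor'."""
    if version is None:
        version = (sys.version_info.major, sys.version_info.minor)
    if not requires.startswith(">="):
        raise ValueError("only '>=X.Y' specs supported in this sketch")
    major, minor = (int(part) for part in requires[2:].split("."))
    return tuple(version) >= (major, minor)

print(satisfies(">=3.5", (2, 7)))  # False: exactly the pip error above
print(satisfies(">=3.5", (3, 9)))  # True
```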

14. After starting HBase with start-hbase.sh, running status in the HBase shell fails: ERROR: Can't get master address from ZooKeeper; znode data == null

Background: Hadoop was started first, then ZooKeeper, and jps showed all processes healthy. After starting HBase, the HMaster process on the master node would sometimes appear and sometimes vanish shortly after startup, while slave1 and slave2 both had HRegionServer running, and http://master:60010/master-status was unreachable.
Checking the logs under /usr/local/src/hbase-0.98.6-hadoop2/logs:
]# cat hbase-root-master-master.log
revealed the underlying error (screenshot omitted).

After some searching, the cause was clear: HBase had been reinstalled, and while the reinstall itself was fine, ZooKeeper still held the previous installation's /hbase znode.
Fix:
From ZooKeeper's bin directory, run sh zkCli.sh to enter the ZooKeeper client, then:
] rmr /hbase
quit, and restart HBase. Everything then works normally.

15. Linux: 'Device eth0 does not seem to be present, delaying initialization', plus bridged-network configuration

Problem
A Linux server was cloned in VirtualBox: Hadoop01 was created from CentOS6.5_Base as a full clone, with the MAC address reinitialized during cloning.

The source server CentOS6.5_Base has IP 192.168.137.10; the plan was to give the clone Hadoop01 the IP 192.168.1.110.

So, after booting Hadoop01, the natural step was to edit the interface config file ifcfg-ethXXX under /etc/sysconfig/network-scripts and change the IP to 192.168.1.110 (screenshot of the edit omitted).

Restarting the network with service network restart then failed with the error above (screenshot omitted).

Fix:
Run ifconfig -a (output omitted). It shows the server actually has an eth1 interface (MAC 08:00:27:93:B8:C2), while the config file ifcfg-eth0 still names the device eth0 — a leftover from before the clone's MAC was reinitialized. Rename the device in the config to match the real interface.

Restarting the network service then succeeds (output omitted).

With that fixed, bridged networking can be configured:
1. Before booting, set the VM's network adapter to bridged mode.
2. After booting, assign a temporary IP with ifconfig eth2 192.168.8.110, and add an IP-to-hostname mapping in /etc/hosts. (The address is chosen to sit on the same subnet as the IPv4 address shown on the Windows host.)

3. Ping this IP from Windows; it responds.
4. Once pings succeed, connect with PuTTY (or another SSH client) and write the permanent interface configuration:
]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.8.110
NETMASK=255.255.255.0
GATEWAY=192.168.8.1

When done, restart networking: service network restart
5. Disable the firewalls:
service iptables stop
service ip6tables stop
service iptables status
service ip6tables status

chkconfig iptables off
chkconfig ip6tables off

vi /etc/selinux/config
SELINUX=disabled

Also turn off the Windows firewall in the Control Panel; if it stays on, Windows and the VM may not be able to ping each other.
6. Configure DNS so the VM can reach the internet:
]# vi /etc/resolv.conf
nameserver 8.8.8.8

7. Test with curl www.baidu.com (output omitted); a normal response confirms the network configuration is complete.

16. Starting Kafka (using its bundled ZooKeeper) on a single CentOS machine fails: The Cluster ID doesn't match stored clusterId Some...

Fix: per the log, the two cluster IDs disagree.
Edit the meta.properties file in the Kafka log directory (the directory pointed to by log.dirs in config/server.properties) and set cluster.id to the ID the log reports as not matching, then start Kafka again. It comes up cleanly.

17. An IDEA Maven project's pom.xml reports: Dependency XXX not found

(Screenshot omitted.)

The jar is present in the Maven repository, yet the 'not found' error persists. After some digging, a few files turn out to be involved:
(1) Maven's settings.xml

<mirror>
  <id>alimaven</id>
  <name>aliyun maven</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
  <!-- Better not to use "*" here: that routes every repository through this
       mirror, but aliyun only mirrors central, so write "central" instead. -->
  <mirrorOf>*</mirrorOf>
</mirror>

(2) Taking the example above: scala-library can be found on Maven Repository, but the search result page (screenshot omitted; note the highlighted repository name) indicates which repository actually hosts the jar.

In this case the jar lives in the Central repository rather than the one the build was resolving against.
(3) Fix
With the cause known, there are two options: add the needed repository in settings.xml, or declare it directly in pom.xml. The second is recommended; add the following just before </project>:
<repositories>
    <repository>
        <id>JBoss repository</id>
        <url>https://repository.jboss.org/nexus/content/repositories/releases/</url>
    </repository>
</repositories>
18. yum install ntp fails

yum install ntp -y succeeded on the master node but failed on the slave nodes (screenshot omitted).

Fix: run the following in order:
yum clean all
yum distro-sync
yum update
Then retry the install; it succeeds.

19. Maven packaging fails with java.lang.StackOverflowError

The build threw java.lang.StackOverflowError while packaging the project.
Fix: in IDEA, under Settings → Maven → Runner → VM Options, add -Xss4096k to enlarge the build JVM's thread stack (screenshot omitted).

20. Hive job killed for exceeding memory limits

is running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical memory used; 8.9 GB of 2.1 GB virtual memory used. Killing container.
Fixes:
1. Hive side: raise the child JVM heap in hive-site.xml:

<property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx7006m</value>
</property>
2. Hadoop side: raise the per-task memory allocation in mapred-site.xml. 'Container is running beyond physical memory' means a task needed more physical memory than it was allocated; each map and reduce task has its own cap, and exceeding it kills the container:
<property>
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>
</property>
3. Hadoop side: adjust YARN's virtual-memory check in yarn-site.xml.
For errors like 'running beyond virtual memory limits. Current usage: 32.1mb of 1.0gb physical memory used; 6.2gb of 2.1gb virtual memory used. Killing container':
this comes from how YARN computes virtual memory. It multiplies the physical memory requested by the program (1 GB in the example) by a ratio (yarn.nodemanager.vmem-pmem-ratio, default 2.1) to derive the virtual-memory cap, and kills the container when actual virtual usage exceeds it. Raising the ratio solves it, or the check can be disabled entirely via yarn.nodemanager.vmem-check-enabled:
<property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
</property>
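The numbers in the error message follow directly from that ratio; a one-line check of the arithmetic:

```python
# YARN derives the virtual-memory cap from the requested physical memory.
physical_gb = 1.0          # container's physical memory request
vmem_pmem_ratio = 2.1      # yarn.nodemanager.vmem-pmem-ratio default
vmem_limit_gb = physical_gb * vmem_pmem_ratio

print(vmem_limit_gb)  # 2.1, the "of 2.1 GB virtual memory" in the message
```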
21. Hive metastore service fails to start

Fix:
Analysis pointed to an outdated jar in /usr/local/src/hive/lib.
Downloading disruptor-3.4.1.jar and replacing the old disruptor-3.3.0.jar let the metastore service start normally.

22.cloudera-scm-server啟動(dòng)失敗

cloudera-scm-server安裝完成后啟動(dòng)server:
systemctl start cloudera-scm-server
沒(méi)有生成日志,通過(guò)狀態(tài)查看:
systemctl status cloudera-scm-server
報(bào)錯(cuò)如下:

[root@master cloudera]# systemctl status cloudera-scm-server
● cloudera-scm-server.service - Cloudera CM Server Service
   Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-server.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since 五 2022-03-18 12:14:18 CST; 5min ago
  Process: 5415 ExecStart=/opt/cloudera/cm/bin/cm-server (code=exited, status=1/FAILURE)
  Process: 5412 ExecStartPre=/opt/cloudera/cm/bin/cm-server-pre (code=exited, status=0/SUCCESS)
 Main PID: 5415 (code=exited, status=1/FAILURE)

3月 18 12:14:18 master systemd[1]: start request repeated too quickly for cloudera-scm-server.service
3月 18 12:14:18 master systemd[1]: Failed to start Cloudera CM Server Service.
3月 18 12:14:18 master systemd[1]: Unit cloudera-scm-server.service entered failed state.
3月 18 12:14:18 master systemd[1]: cloudera-scm-server.service failed.
3月 18 12:14:45 master systemd[1]: start request repeated too quickly for cloudera-scm-server.service
3月 18 12:14:45 master systemd[1]: Failed to start Cloudera CM Server Service.
3月 18 12:14:45 master systemd[1]: cloudera-scm-server.service failed.
3月 18 12:15:48 master systemd[1]: start request repeated too quickly for cloudera-scm-server.service
3月 18 12:15:48 master systemd[1]: Failed to start Cloudera CM Server Service.
3月 18 12:15:48 master systemd[1]: cloudera-scm-server.service failed. 

Fix: the CM startup script looks for a JDK under /usr/java (among other standard locations), so symlink the real JDK there:

[root@master cloudera]# mkdir -p /usr/java
[root@master cloudera]# echo $JAVA_HOME
/usr/local/src/jdk1.8.0_144
[root@master cloudera]# ln -s /usr/local/src/jdk1.8.0_144 /usr/java/default
Starting the server again succeeds, and logs are written normally:

[root@master java]# systemctl status cloudera-scm-server
● cloudera-scm-server.service - Cloudera CM Server Service
   Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-server.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2022-03-18 12:24:03 CST; 5min ago
  Process: 5510 ExecStartPre=/opt/cloudera/cm/bin/cm-server-pre (code=exited, status=0/SUCCESS)
 Main PID: 5512 (java)
   CGroup: /system.slice/cloudera-scm-server.service
           └─5512 /usr/java/default/bin/java -cp .:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:lib/* -serv...

3月 18 12:24:03 master systemd[1]: Starting Cloudera CM Server Service...
3月 18 12:24:03 master systemd[1]: Started Cloudera CM Server Service.
3月 18 12:24:03 master cm-server[5512]: JAVA_HOME=/usr/java/default
3月 18 12:24:03 master cm-server[5512]: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
3月 18 12:24:07 master cm-server[5512]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system proper...tion logging.
3月 18 12:24:17 master cm-server[5512]: 12:24:17.894 [main] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Table 'scm.CM_VERSION' doesn't exist
Hint: Some lines were ellipsized, use -l to show in full.

Log tail:

[root@master cloudera]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
2022-03-18 12:24:46,099 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Built CSD-based test descriptor FAILED_DATA_DIRS with scope KUDU-KUDU_TSERVER
2022-03-18 12:24:46,099 WARN main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Duplicate health test KUDU-6.1.0-FAILED_DATA_DIRS from CSD KUDU6_1-6.3.1.
2022-03-18 12:24:46,100 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Registered all CSD-based health tests for KUDU from CSD KUDU6_1-6.3.1
2022-03-18 12:24:46,100 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Built CSD-based test descriptor FULL_DATA_DIRS with scope KUDU-KUDU_MASTER
2022-03-18 12:24:46,100 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Built CSD-based test descriptor FAILED_DATA_DIRS with scope KUDU-KUDU_MASTER
2022-03-18 12:24:46,100 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Built CSD-based test descriptor FULL_DATA_DIRS with scope KUDU-KUDU_TSERVER
2022-03-18 12:24:46,100 WARN main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Duplicate health test KUDU-6.2.0-FULL_DATA_DIRS from CSD KUDU6_2-6.3.1.
2022-03-18 12:24:46,101 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Built CSD-based test descriptor FAILED_DATA_DIRS with scope KUDU-KUDU_TSERVER
2022-03-18 12:24:46,101 WARN main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Duplicate health test KUDU-6.2.0-FAILED_DATA_DIRS from CSD KUDU6_2-6.3.1.
2022-03-18 12:24:46,101 INFO main:com.cloudera.cmon.kaiser.csd.CsdInfoBasedHealthTestDescriptors: Registered all CSD-based health tests for KUDU from CSD KUDU6_2-6.3.1
2022-03-18 12:24:46,581 INFO main:com.cloudera.server.cmf.HeartbeatRequester: Eager heartbeat initialized 
23. A Flink job submitted to a standalone cluster fails

Exception:
2022-03-20 00:41:23
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:491)
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292)
    at org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:64)
    at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:501)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:302)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:277)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:257)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:250)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:240)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.<init>(OneInputStreamTask.java:65)
    at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1373)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:700)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
    at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487)
    ... 17 more

Fix:
Copy the Hadoop shim jar flink-shaded-hadoop-2-uber-2.7.5-10.0.jar from the master node to the worker nodes, restart the cluster with start-cluster.sh, and resubmit the jar; the job then runs normally.

24. Running a Hadoop MapReduce example jar fails

Command:
[root@bigdata101 hadoop]# hadoop jar /usr/local/src/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output1
Error output:

2022-03-22 14:39:47,825 INFO mapreduce.Job:  map 0% reduce 0%
2022-03-22 14:39:47,862 INFO mapreduce.Job: Job job_1647930942134_0004 failed with state FAILED due to: Application application_1647930942134_0004 failed 2 times due to AM Container for appattempt_1647930942134_0004_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2022-03-22 14:39:47.108]Exception from container-launch.
Container id: container_1647930942134_0004_02_000001
Exit code: 1

[2022-03-22 14:39:47.112]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster


[2022-03-22 14:39:47.113]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster


For more detailed output, check the application tracking page: http://bigdata101:8088/cluster/app/application_1647930942134_0004 Then click on links to logs of each attempt.
. Failing the application.
2022-03-22 14:39:47,905 INFO mapreduce.Job: Counters: 0 

Fix:
On the master, run:
hadoop classpath

and note the output, then edit the YARN configuration:
vi $HADOOP_HOME/etc/hadoop/yarn-site.xml
Add a property:

<property>
        <name>yarn.application.classpath</name>
        <value>(paste the output of hadoop classpath here)</value>
</property>

Restart YARN.

25. Running a Hadoop MapReduce jar fails with 'name node is in safe mode'

Fix:
Go to the Hadoop installation root and take the NameNode out of safe mode:
cd /usr/local/hadoop
bin/hadoop dfsadmin -safemode leave
(On newer Hadoop versions the equivalent command is hdfs dfsadmin -safemode leave.)

26. A Flume agent consuming from Kafka into HDFS fails with a jar conflict

Start command:

[root@bigdata102 lib]# /usr/local/src/apache-flume-1.9.0-bin/bin/flume-ng agent --conf-file /usr/local/src/apache-flume-1.9.0-bin/conf/kafka-flume-hdfs.conf --name a1 -Dflume.root.logger=INFO,LOGFILE

Exception:
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:221)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:572)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:412)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
        at java.lang.Thread.run(Thread.java:748)
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1679)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:221)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:572)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:412)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
        at java.lang.Thread.run(Thread.java:748) 

Cause:
Hadoop's guava-27.0-jre.jar conflicts with the older guava jar shipped in Flume's lib directory.
Fix:
Remove Flume's guava jar and copy Hadoop's over:

[root@bigdata102 lib]# cp /usr/local/src/hadoop-3.1.3/share/hadoop/hdfs/lib/guava-27.0-jre.jar ./
