Overview:
Use the following command to check the running state of the Kafka service:
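On a systemd-based host the check can be run as below; the unit name kafka.service matches the unit path shown in the output that follows, so adjust it if your unit is named differently:

```shell
# Show the unit's state and its most recent journal lines, unabbreviated
systemctl status kafka.service -l
```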
kafka.service
   Loaded: loaded (/usr/lib/systemd/system/kafka.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2021-09-22 14:43:11 CST; 1h 43min ago
  Process: 7363 ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties (code=exited, status=1/FAILURE)
 Main PID: 7363 (code=exited, status=1/FAILURE)

Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,295] WARN [ReplicaManager broker=1] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,298] WARN [GroupCoordinator 1]: Failed to write empty metadata for group KqBatchAna: This is not the correct coordinator. (kafka.co...upCoordinator)
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,303] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, ...-8, __consumer
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,304] INFO [ReplicaAlterLogDirsManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets...fsets-8, __con
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,378] WARN [ReplicaManager broker=1] Broker 1 stopped fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_...fsets-21,__con
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,379] WARN Stopping serving logs in dir /tmp/kafka-logs (kafka.log.LogManager)
Sep 22 14:43:11 devops02 kafka-server-start.sh[7363]: [2021-09-22 14:43:11,386] ERROR Shutdown broker because all log dirs in /tmp/kafka-logs have failed (kafka.log.LogManager)
Sep 22 14:43:11 devops02 systemd[1]: kafka.service: main process exited, code=exited, status=1/FAILURE
Sep 22 14:43:11 devops02 systemd[1]: Unit kafka.service entered failed state.
Sep 22 14:43:11 devops02 systemd[1]: kafka.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Go to the Kafka log directory /usr/local/kafka/logs and inspect the server.log log file, as follows:
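Filtering the broker log for ERROR entries is usually the quickest way to find the failure; the install prefix /usr/local/kafka below is taken from this setup:

```shell
# Pull the most recent ERROR lines (with line numbers) out of server.log
grep -n "ERROR" /usr/local/kafka/logs/server.log | tail -n 20
```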
[2021-09-22 14:43:11,286] ERROR Error while rolling log segment for __consumer_offsets-8 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.io.FileNotFoundException: /tmp/kafka-logs/__consumer_offsets-8/00000000000000000000.index (No such file or directory)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:182)
    at kafka.log.AbstractIndex.resize(AbstractIndex.scala:175)
    at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:241)
    at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:241)
    at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:507)
    at kafka.log.Log.$anonfun$roll$8(Log.scala:2037)
    at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:2037)
    at scala.Option.foreach(Option.scala:437)
    at kafka.log.Log.$anonfun$roll$2(Log.scala:2037)
    at kafka.log.Log.roll(Log.scala:2453)
    at kafka.log.Log.maybeRoll(Log.scala:1988)
    at kafka.log.Log.append(Log.scala:1263)
    at kafka.log.Log.appendAsLeader(Log.scala:1112)
    at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1069)
    at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1057)
    at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:958)
    at scala.collection.Iterator$$anon$9.next(Iterator.scala:575)
    at scala.collection.mutable.Growable.addAll(Growable.scala:62)
    at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
    at scala.collection.immutable.MapBuilderImpl.addAll(Map.scala:692)
    at scala.collection.immutable.Map$.from(Map.scala:643)
    at scala.collection.immutable.Map$.from(Map.scala:173)
    at scala.collection.MapOps.map(Map.scala:266)
    at scala.collection.MapOps.map$(Map.scala:266)
    at scala.collection.AbstractMap.map(Map.scala:372)
    at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:946)
    at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:616)
    at kafka.coordinator.group.GroupMetadataManager.storeGroup(GroupMetadataManager.scala:325)
    at kafka.coordinator.group.GroupCoordinator.$anonfun$onCompleteJoin$1(GroupCoordinator.scala:1206)
    at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:227)
    at kafka.coordinator.group.GroupCoordinator.onCompleteJoin(GroupCoordinator.scala:1178)
    at kafka.coordinator.group.DelayedJoin.onComplete(DelayedJoin.scala:43)
    at kafka.server.DelayedOperation.forceComplete(DelayedOperation.scala:72)
    at kafka.coordinator.group.DelayedJoin.$anonfun$tryComplete$1(DelayedJoin.scala:38)
    at kafka.coordinator.group.GroupCoordinator.$anonfun$tryCompleteJoin$1(GroupCoordinator.scala:1172)
    at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.scala:17)
    at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:227)
    at kafka.coordinator.group.GroupCoordinator.tryCompleteJoin(GroupCoordinator.scala:1171)
    at kafka.coordinator.group.DelayedJoin.tryComplete(DelayedJoin.scala:38)
    at kafka.server.DelayedOperation.safeTryCompleteOrElse(DelayedOperation.scala:110)
    at kafka.server.DelayedOperationPurgatory.tryCompleteElseWatch(DelayedOperation.scala:234)
    at kafka.coordinator.group.GroupCoordinator.prepareRebalance(GroupCoordinator.scala:1144)
    at kafka.coordinator.group.GroupCoordinator.$anonfun$maybePrepareRebalance$1(GroupCoordinator.scala:1118)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:227)
    at kafka.coordinator.group.GroupCoordinator.maybePrepareRebalance(GroupCoordinator.scala:1117)
    at kafka.coordinator.group.GroupCoordinator.removeMemberAndUpdateGroup(GroupCoordinator.scala:1156)
    at kafka.coordinator.group.GroupCoordinator.$anonfun$handleLeaveGroup$3(GroupCoordinator.scala:498)
    at scala.collection.immutable.List.map(List.scala:246)
    at kafka.coordinator.group.GroupCoordinator.$anonfun$handleLeaveGroup$2(GroupCoordinator.scala:470)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at kafka.coordinator.group.GroupMetadata.inLock(GroupMetadata.scala:227)
    at kafka.coordinator.group.GroupCoordinator.handleLeaveGroup(GroupCoordinator.scala:467)
    at kafka.server.KafkaApis.handleLeaveGroupRequest(KafkaApis.scala:1659)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
    at java.lang.Thread.run(Thread.java:748)

Cause of the error: Linux periodically cleans up files under /tmp, and Kafka stores its data in /tmp/kafka-logs by default. The data directory is therefore cleaned out on a schedule, which breaks the broker.
On CentOS 7 there are three systemd services related to this cleanup:

systemd-tmpfiles-setup.service:     Create Volatile Files and Directories
systemd-tmpfiles-setup-dev.service: Create static device nodes in /dev
systemd-tmpfiles-clean.service:     Cleanup of Temporary Directories

There are also three related configuration locations, namely:
/etc/tmpfiles.d/*.conf
/run/tmpfiles.d/*.conf
/usr/lib/tmpfiles.d/*.conf

Use the following command to view the logs related to tmpfiles:
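One way to see when the periodic cleanup last ran, and when it will run next, is to query the journal and the timer; the unit names below are the stock CentOS 7 ones:

```shell
# Recent runs of the cleanup service
journalctl -u systemd-tmpfiles-clean.service --no-pager | tail -n 5
# The timer that schedules it
systemctl list-timers systemd-tmpfiles-clean.timer --no-pager
```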
The tmp directory is configured in /usr/lib/tmpfiles.d/tmp.conf as follows:
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

# See tmpfiles.d(5) for details

# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d

# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
Solution 1
Move Kafka's data out of /tmp: edit the Kafka configuration file config/server.properties and change the log.dirs setting:
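A minimal sketch of the change, assuming the install prefix /usr/local/kafka from this setup; the target directory /data/kafka-logs and the kafka service account are examples, so substitute any persistent path outside /tmp and the account the broker actually runs as:

```shell
# Create the new data directory (example path)
sudo mkdir -p /data/kafka-logs
sudo chown kafka:kafka /data/kafka-logs   # adjust to the broker's user/group
# Point log.dirs at it instead of /tmp/kafka-logs
sudo sed -i 's|^log\.dirs=.*|log.dirs=/data/kafka-logs|' \
    /usr/local/kafka/config/server.properties
# Restart the broker to pick up the change
sudo systemctl restart kafka
```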
Solution 2
Add an exclusion for the Kafka data directory by editing the file /usr/lib/tmpfiles.d/tmp.conf:
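In tmpfiles.d syntax, an `x` line excludes a path and everything beneath it from age-based cleanup. The sketch below assumes the /tmp/kafka-logs path from this setup; note that an edited copy of tmp.conf placed under /etc/tmpfiles.d/ overrides the one in /usr/lib and survives package updates:

```
# /usr/lib/tmpfiles.d/tmp.conf (or an overriding copy in /etc/tmpfiles.d/)
# 'x' = ignore this path (and its contents) during cleaning
x /tmp/kafka-logs
```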
(End)