.. _controlcenter_troubleshooting:

Troubleshooting |c3-short|
******************************

Common issues
=============

Installation and Setup
^^^^^^^^^^^^^^^^^^^^^^

If you encounter issues during installation and setup, try these solutions.

^^^^^^^^^^^^^^^^^^^^^^^^^^
Bad security configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^

* Check the security configuration for all brokers, the metrics reporter, the client interceptors, and |c3-short| (see `debugging check configuration <#check-configurations>`_). For example, is it SASL_SSL, SASL_PLAINTEXT, or SSL?

* Possible errors include:

  .. code:: bash

     ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)

  .. code:: bash

     Caused by: org.apache.kafka.common.KafkaException: java.lang.IllegalArgumentException: No serviceName defined in either JAAS or Kafka configuration

  .. code:: bash

     org.apache.kafka.common.errors.IllegalSaslStateException: Unexpected handshake request with client mechanism GSSAPI, enabled mechanisms are [GSSAPI]

* Verify that the correct Java Authentication and Authorization Service (JAAS) configuration was detected.

* If ACLs are enabled, check them.

* To verify that you can communicate with the cluster, try to produce and consume using the ``console-*`` tools with the same security settings.

^^^^^^^^^^^^^^^^^^^^^^^^^^
InvalidStateStoreException
^^^^^^^^^^^^^^^^^^^^^^^^^^

* This error usually indicates that data is corrupted in the configured ``confluent.controlcenter.data.dir``, for example after an unclean shutdown. To fix it, give |c3-short| a new ID by changing ``confluent.controlcenter.id`` and restart.

* Make sure |c3-short| has permission to read and write the configured ``confluent.controlcenter.data.dir``.

^^^^^^^^^^^^^^^^^^
Not enough brokers
^^^^^^^^^^^^^^^^^^

Check the logs for the related error ``not enough brokers``. Verify that the `topic replication factors <#check-configurations>`_ are set correctly and that there are enough brokers available.

^^^^^^^^^^^^^^^^^^^^^^^
Local store permissions
^^^^^^^^^^^^^^^^^^^^^^^

Check the local permissions on the |c3-short| state directory, which is defined by ``confluent.controlcenter.data.dir`` in ``control-center.properties``. The directory must be accessible to the user ID that was used to start |c3-short|.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multiple |c3-short| instances with the same ID
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You must use a unique ID for each |c3-short| instance, including instances running in Docker. Duplicate IDs are not supported and will cause problems.

^^^^^^^^^^^^^^^
License expired
^^^^^^^^^^^^^^^

If you see a message similar to this:

.. codewithvars:: bash

   [2017-08-21 14:12:33,812] WARN checking license failure. please contact support@confluent.io for a license key: Unable to process JOSE object (cause: org.jose4j.lang.JoseException: Invalid JOSE….

verify that a valid license is specified in ``confluent.license=``. This can be either the license key itself or a path to a license file. For more information, see the :ref:`Control Center configuration documentation `.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A schema for message values has not been set for this topic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you encounter this error message, verify that the |sr| ``listeners=http://0.0.0.0:8081`` configuration matches the |c3-short| ``confluent.controlcenter.schema.registry.url=http://localhost:8081`` configuration.
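
As a quick cross-check, you can query |sr| directly from the host where |c3-short| runs to confirm that the configured URL is reachable and that a value schema is registered for the topic. This is only a sketch: the host, port, and ``<topic>`` are placeholders, and the ``<topic>-value`` subject name assumes the default subject naming strategy.

.. code:: bash

   # Is Schema Registry reachable at the URL Control Center is configured with?
   curl http://localhost:8081/subjects

   # Is a value schema registered for the topic (default <topic>-value subject)?
   curl http://localhost:8081/subjects/<topic>-value/versions/latest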

For more information about |c3-short| logging settings, see :ref:`control_center_logging_settings`.

System health
^^^^^^^^^^^^^

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Web interface that is blank or stuck loading
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the web interface is blank or stuck loading, select the cluster in the drop-down and use the information below to troubleshoot.

* Are there `errors or warnings in the logs <#check-logs>`_? For more information on how to find the logs, see the :ref:`documentation `.

* `What are you monitoring <#size-of-clusters>`_? Are you `under-provisioned <#system-check>`_?

* Is there `a lag in Control Center <#consumer-offset-lag>`_, especially on the ``MetricsAggregateStore`` partitions?

* Use browser debugging tools to check the REST calls and find out whether the requests were made successfully and returned a valid response, specifically these requests:

  .. figure:: ../images/c3-troubleshoot.png

  **Tip:** You can view these calls by using common web browser tools (for example, Chrome Developer Tools).

* The ``/2.0/metrics//maxtime`` endpoint should return the latest timestamp that |c3-short| has for metrics data.

* If no data is returned from the backend, verify that data is arriving on the input topic and review the logs for issues.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The |c3-short| is getting ready to launch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If a rocket ship graphic appears in the web interface, use the information below to troubleshoot.

.. figure:: ../images/rocketship.png

* Is the correct cluster selected in the drop-down?

* Usually this means that |ak-tm| doesn't have any metrics data yet, but this image can also indicate that a 500 Internal Server Error has occurred.

* If you get a 500 error, check the |c3-short| logs for errors.

* Use browser debugging tools to check the response. An empty response (``{ }``) from the ``/2.0/metrics//maxtime`` endpoint means that |ak| hasn't received any metrics data.

* Check your |c3-short| log output for WARN messages such as ``broker= is not instrumented with ConfluentMetricsReporter``. If you see this warning, follow the :ref:`instructions ` for configuring |ak| with ``metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter``, then restart the |ak| brokers to pick up the configuration change.

* Verify that the metrics reporter is set up correctly. Dump the ``_confluent-metrics`` input topic to see whether any messages are being produced.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nothing is produced on the Metrics (``_confluent-metrics``) topic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* Verify that the :ref:`metrics reporter is set up correctly `, with security configured.

* Check the |ak| broker logs and look for timeouts or other errors (for example, `RecordTooLargeException <#record-too-large-exception>`_).

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|c3-short| is lagging behind |ak|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If |c3-short| is not reporting the latest data and the charts are falling behind, use this information to troubleshoot.

* This can happen if |c3-short| is underpowered or churning through a large backlog.

* Check the `offset lag <#consumer-offset-lag>`_. If the lag is large and increasing over time, |c3-short| may not be able to handle the monitoring load. Also run the additional checks for `cluster size <#size-of-clusters>`_ and `system resources <#system-check>`_.
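
  For example, you can watch the lag on the ``MetricsAggregateStore`` changelog partitions over several runs. This is only a sketch; the bootstrap server and the consumer group name (shown in full under `Consumer offset lag <#consumer-offset-lag>`_) are placeholders:

  .. code:: bash

     # Run this repeatedly and watch the LAG column; steadily growing lag suggests
     # Control Center cannot keep up with the monitoring load.
     ./bin/kafka-consumer-groups --bootstrap-server <host:port> --describe \
         --group _confluent-controlcenter-<version>-<control-center-id> | grep MetricsAggregateStore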

* With |cp| 3.3.x and later, you can set a short time window for the skip-backlog monitoring settings ``confluent.monitoring.interceptor.topic.skip.backlog.minutes`` and ``confluent.metrics.topic.skip.backlog.minutes``. |c3-short| ignores everything on the input topics that is older than the specified number of minutes; for example, set these to ``0`` to process only from the latest offsets. This is useful when you need |c3-short| to catch up faster. For more information, see the :ref:`Control Center configuration documentation `.

.. _record-too-large-exception:

^^^^^^^^^^^^^^^^^^^^^^^
RecordTooLargeException
^^^^^^^^^^^^^^^^^^^^^^^

If you see this error in the broker logs, use this information to troubleshoot.

* Set ``confluent.metrics.reporter.max.request.size=10485760`` in the broker ``server.properties`` file. This is the default in 3.3.x and later.

* Change the topic configuration for ``_confluent-metrics`` to accept large messages. This is the default in 3.3.x and later. For more information, see the :ref:`Metrics Reporter message size documentation `.

  .. sourcecode:: bash

     bin/kafka-topics.sh --zookeeper <zookeeper host:port> --alter --config max.message.bytes=10485760 --topic _confluent-metrics

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Parts of the broker or topic table have blank values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is a known issue that should be transient until |c3-short| has caught up. It can be caused by:

* Different streams topologies processing at different rates during a restore.

* |c3-short| lagging or having trouble keeping up due to a lack of resources.

Streams Monitoring
^^^^^^^^^^^^^^^^^^

^^^^^^^^^^^^
Blank charts
^^^^^^^^^^^^

If you are seeing blank charts, use this information to troubleshoot.

* Verify that the :ref:`Confluent Monitoring Interceptors ` are properly configured on the clients, including any required security configuration settings.

* For the selected time range, check whether new data is arriving on the `_confluent-monitoring topic <#review-input-topics>`_.

* It is normal for |c3-short| not to show unconsumed messages, because the expected consumption is unknown; verify whether there are consumers reading from the topics.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Unexpected herringbone pattern
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are seeing an unexpected herringbone pattern, use this information to troubleshoot.

* Verify whether the clients are being shut down cleanly.

* Look for these errors in the client logs:

  * ``Failed to shutdown metrics reporting thread...``
  * ``Failed to publish all cached metrics on termination for...``
  * ``ERROR Terminating publishing and collecting monitoring metrics for``
  * ``Failed to close monitoring interceptor for…``

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Missing consumers or consumer groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If consumers or consumer groups are missing, use this information to troubleshoot.

* Look for errors or warnings in the missing client’s log.

* Verify whether the input topic is receiving interceptor data for the missing client.

Connect
^^^^^^^

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The |c3-short| is getting ready to launch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If a rocket ship graphic appears in the web interface, use the information below to troubleshoot.

.. figure:: ../images/rocketship.png

* Is the Connect cluster that is defined in ``confluent.controlcenter.connect.cluster`` available?
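
  A minimal sketch of what this setting typically looks like in ``control-center.properties``; the host name and port below are placeholders for your own Connect worker:

  .. code:: bash

     # REST endpoint of the Connect cluster that Control Center talks to
     confluent.controlcenter.connect.cluster=http://connect-worker-1:8083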

* Can you reach the Connect endpoints directly by running a cURL command against the Connect REST API (for example, ``curl http://<connect-host>:<connect-port>/connectors``)?

* Check the Connect logs for any errors. |c3-short| is a proxy to Connect.

Debugging
=========

Check logs
^^^^^^^^^^

These are the |c3-short| log types:

* ``c3.log`` - |c3-short| and HTTP logging that is not related to streams
* ``c3-streams.log`` - streams
* ``c3-kafka.log`` - client, |zk|, and |ak|

Here are things to look for in the logs:

* ``ERROR``
* ``shutdown``
* ``Exceptions`` - verify that the brokers can be reached
* ``WARN``
* ``Healthcheck`` errors and warnings

If nothing is obvious, turn `DEBUG logging on <#enable-debug-and-trace-logging>`_ and restart |c3-short|.

Enable debug and trace logging
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Open the ``/etc/confluent-control-center/log4j.properties`` file. This file is referenced by the ``CONTROL_CENTER_LOG4J_OPTS`` environment variable.

#. Set your debugging options:

   - To enable debug logging, change the log level to debug at the root level:

     .. code:: bash

        log4j.rootLogger=DEBUG, stdout

   - To enable trace logging, change the root logger to trace at the root level:

     .. code:: bash

        log4j.rootLogger=TRACE, stdout

   - To enable additional streams logging, particularly at the request of Support, follow this example:

     .. codewithvars:: bash

        log4j.rootLogger=DEBUG, stdout

        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

        log4j.appender.streams=org.apache.log4j.ConsoleAppender
        log4j.appender.streams.layout=org.apache.log4j.PatternLayout
        log4j.appender.streams.layout.ConversionPattern=[%d] %p %m (%c)%n

        log4j.appender.streams.filter.1=io.confluent.Log4jRateFilter
        # Allows everything that is greater than or equal to the specified level
        log4j.appender.streams.filter.1.level=TRACE
        # Allows rate/second logs at less than the specified level
        #log4j.appender.streams.filter.1.rate=25

        log4j.logger.kafka=ERROR, stdout
        log4j.logger.org.apache.kafka.streams=INFO, streams
        log4j.additivity.org.apache.kafka.streams=false

        log4j.logger.io.confluent.controlcenter.streams=INFO, streams
        log4j.additivity.io.confluent.controlcenter.streams=false

        log4j.logger.org.apache.zookeeper=ERROR, stdout
        log4j.logger.org.apache.kafka=ERROR, stdout
        log4j.logger.org.I0Itec.zkclient=ERROR, stdout

#. Restart |c3-short|.

   .. codewithvars:: bash

      ./bin/control-center-stop
      ./bin/control-center-start ../etc/confluent-control-center/control-center.properties

Check configurations
^^^^^^^^^^^^^^^^^^^^

* Is security enabled? Check the :ref:`security configuration settings ` on the brokers, the clients, and |c3-short|.

* Verify that the configuration prefixes are correct.

* Are the metrics reporter and the interceptors installed and configured correctly?

* Verify the topic configurations for all |c3-short| topics: replication factor, timestamp type, ``min.insync.replicas``, and retention. You can use the following command, where the |zk| host and port (``<zookeeper host:port>``) are specified. Verify that the correct configurations are picked up by each process.

  .. codewithvars:: bash

     ./bin/kafka-topics --zookeeper <zookeeper host:port> --describe

Review input topics
^^^^^^^^^^^^^^^^^^^

* ``_confluent-monitoring`` and ``_confluent-metrics`` are the entry points for |c3-short| data.

* Verify that the input topics have been created, where the |zk| host and port (``<zookeeper host:port>``) and the topic (``<topic>``) are specified:

  .. codewithvars:: bash

     bin/kafka-topics.sh --zookeeper <zookeeper host:port> --describe --topic <topic>

* Verify that data is being produced to the input topics. The security settings must be properly configured in the consumer for this to work.

  This is accomplished by specifying the properties file that was used to start |c3-short| (for example, ``control-center.properties``) in the following command, and setting ``<topic>`` to the topic you wish to read:

  .. codewithvars:: bash

     bin/control-center-console-consumer config/control-center.properties --topic <topic>
     {"clientType":"PRODUCER","clientId":"rock-client-producer-4","group...
     {"clientType":"CONSUMER","clientId":"rock-client-consumer-2","group...

Size of clusters
^^^^^^^^^^^^^^^^

For examples of how to size your environment, review the :ref:`Control Center example deployments `.

System check
^^^^^^^^^^^^

Check the system-level metrics on the host where |c3-short| is running, including CPU, memory, disk, and JVM settings. Are the settings within the :ref:`recommended values `?

Frontend and REST API
^^^^^^^^^^^^^^^^^^^^^

* Using browser debugging tools, view the network requests to verify that the request and response show the correct data.

  **Tip:** You can right-click a row to copy its content as HAR or cURL.

  .. figure:: ../images/save-as-curl.png

* The backend REST calls are logged in ``c3.log``.

Consumer offset lag
^^^^^^^^^^^^^^^^^^^

Verify that the offset lag for the |c3-short| topics is not increasing over time. Review the ``MetricsAggregateStore`` and ``aggregate-rekey`` topics in particular, because they are often the bottleneck. You need to run this command multiple times to observe the trend, where the bootstrap server (``<host:port>``), the |c3-short| version (``<version>``), and the ID (``<control-center-id>``) are specified.

.. codewithvars:: bash

   ./bin/kafka-consumer-groups --bootstrap-server <host:port> --describe --group _confluent-controlcenter-<version>-<control-center-id>

Enable GC logging
^^^^^^^^^^^^^^^^^

To enable GC logs, restart |c3-short| with the following, where the GC log directory (``<log-dir>``) is specified:

.. codewithvars:: bash

   CONTROL_CENTER_JVM_PERFORMANCE_OPTS="-server -verbose:gc -Xloggc:<log-dir>/gc.log -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCCause -Djava.awt.headless=true"

Thread dump
^^^^^^^^^^^

Run this command to take a thread dump:

.. codewithvars:: bash

   jstack -l $(jcmd | grep -i 'controlcenter\.ControlCenter' | awk '{print $1}') > jstack.out

Data directory
^^^^^^^^^^^^^^

The |c3-short| local state is stored in the directory configured by ``confluent.controlcenter.data.dir``. You can use this command to determine the size of the data directory (``<data-dir>``):

.. codewithvars:: bash

   du -h <data-dir>
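
A related quick check, sketched below with ``<data-dir>`` standing in for the value of ``confluent.controlcenter.data.dir``, is to confirm that the directory is owned by the user that runs |c3-short| and that the file system holding it still has free space:

.. code:: bash

   # Who owns the state directory, and what are its permissions?
   # (It must be readable and writable by the user that starts Control Center.)
   ls -ld <data-dir>

   # How much free space remains on the file system that holds it?
   df -h <data-dir>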