An index receives approximately 50GB of data per day per indexer at an even and
consistent rate. The customer would like to keep this data searchable for a minimum of 30
days. In addition, they have hourly scheduled searches that process a week’s worth of data
and are quite sensitive to search performance.
Given ideal conditions (no restarts and no drops or bursts in data volume), and following PS best
practices, which of the following sets of indexes.conf settings can be leveraged to meet the
requirements?
A. frozenTimePeriodInSecs, maxDataSize, maxVolumeDataSizeMB, maxHotBuckets
B. maxDataSize, maxTotalDataSizeMB, maxHotBuckets, maxGlobalDataSizeMB
C. maxDataSize, frozenTimePeriodInSecs, maxVolumeDataSizeMB
D. frozenTimePeriodInSecs, maxWarmDBCount, homePath.maxDataSizeMB, maxHotSpanSecs
Explanation: Option C (maxDataSize, frozenTimePeriodInSecs, and maxVolumeDataSizeMB) contains the
indexes.conf settings that can be leveraged to meet the customer's requirements, given the following assumptions and calculations:
The customer has a single indexer with a single storage device for all data.
The customer wants to keep 30 days of data searchable, which means 30 x 50GB
= 1500GB of data in total.
The customer wants to optimize search performance for hourly scheduled
searches that process a week’s worth of data, which means 7 x 50GB = 350GB of
data per search.
The indexes.conf settings that can be used to achieve these goals are:
maxDataSize: This setting determines the maximum size of a bucket before it rolls
from hot to warm. By setting this to auto_high_volume, Splunk software will create
buckets that are approximately 10GB in size, which is suitable for high-volume
data sources. This will result in about 5 hot buckets and 145 warm buckets per
index, which will reduce the number of buckets that need to be searched and
improve search performance.
frozenTimePeriodInSecs: This setting determines how long Splunk software keeps
the data in an index before freezing or deleting it. By setting this to 2592000
seconds (30 days), Splunk software will ensure that the data is searchable for at
least 30 days, and then move it to the frozen state or delete it according to the
coldToFrozenDir or coldToFrozenScript settings.
maxVolumeDataSizeMB: This setting determines the maximum size of a volume; when the limit is
reached, Splunk software freezes the oldest buckets in the volume to make room for new data. By
setting this to 1536000 MB (1500 GB), the volume can hold approximately 30 days of data, providing
a size-based safeguard that complements the time-based retention from the frozenTimePeriodInSecs
setting. A combined configuration sketch follows this list.
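As a hedged illustration only, a minimal indexes.conf sketch applying these three settings might look like the following; the volume name hot_warm, the index name web_logs, and the filesystem path are assumptions for illustration, not values from the question:

[volume:hot_warm]
path = /opt/splunk/var/lib/splunk
# Size-based safeguard: roughly 30 days x 50 GB per day, expressed in MB
maxVolumeDataSizeMB = 1536000

[web_logs]
homePath = volume:hot_warm/web_logs/db
coldPath = volume:hot_warm/web_logs/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# ~10 GB buckets, suitable for a high-volume data source
maxDataSize = auto_high_volume
# Time-based retention: 30 days
frozenTimePeriodInSecs = 2592000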
The other options are incorrect because they either do not meet the requirements of the
customer or they include unnecessary settings. Option A is incorrect because maxHotBuckets is not
needed to meet the stated requirements: it only controls how many hot buckets can exist
concurrently per index, and with an even, consistent data rate the default is sufficient, so it
adds nothing to retention or search performance.
Option B is incorrect because it omits frozenTimePeriodInSecs, so nothing guarantees that data
remains searchable for at least 30 days. maxTotalDataSizeMB caps the total size of a single index
and maxGlobalDataSizeMB applies to SmartStore indexes; both are size limits rather than time-based
retention controls. Also, maxHotBuckets only affects how many hot buckets can exist per index and
does not address the search-performance requirement.
Option D is incorrect because it omits maxDataSize and maxVolumeDataSizeMB, so it does not follow
the PS best practice of sizing buckets with auto_high_volume and controlling storage at the volume
level. homePath.maxDataSizeMB only caps the hot/warm storage of a single index, maxHotSpanSecs
limits the time span of hot buckets rather than retention, and maxWarmDBCount only limits how many
warm buckets can exist per index, which does not address the requirements.
What does Splunk do when it indexes events?
A. Extracts the top 10 fields.
B. Extracts metadata fields such as host, source, source type.
C. Performs parsing, merging, and typing processes on universal forwarders.
D. Creates report acceleration summaries.
Explanation: When Splunk indexes events, it extracts metadata fields such as host, source, and source type from the raw data. These fields identify and categorize the events and enable efficient searching and filtering. Splunk also assigns each event a timestamp (_time) and an internal address (_cd) within its bucket. Splunk does not extract the top 10 fields at index time (most field extraction happens at search time); parsing, merging, and typing run in the ingestion pipelines on indexers or heavy forwarders rather than on universal forwarders; and report acceleration summaries are built after indexing by scheduled summary searches. Therefore, the correct answer is B. Extracts metadata fields such as host, source, source type.
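As a hedged illustration, the following search (run over the always-present _internal index) shows the index-time metadata fields attached to every event:

index=_internal | head 5 | table _time host source sourcetype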
A new search head cluster is being implemented. Which is the correct command to initialize the deployer node without restarting the search head cluster peers?
A. $SPLUNK_HOME/bin/splunk apply shcluster-bundle
B. $SPLUNK_HOME/bin/splunk apply cluster-bundle
C. $SPLUNK_HOME/bin/splunk apply shcluster-bundle -action stage
D. $SPLUNK_HOME/bin/splunk apply cluster-bundle -action stage
Explanation: The correct command is $SPLUNK_HOME/bin/splunk apply shcluster-bundle -action stage (option C). The -action stage flag tells the deployer to copy and stage the configuration bundle locally without pushing it to the search head cluster members, so no member restarts are triggered. A later splunk apply shcluster-bundle -action send -target <member_URI> distributes the staged bundle to the cluster. Running apply shcluster-bundle without -action stage pushes the bundle immediately and can trigger a rolling restart of the members, and apply cluster-bundle is the indexer cluster command, not a search head cluster command.
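As a hedged sketch of the two-step staged workflow described above (the member URI and credentials are placeholders, not values from the question):

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -action stage --answer-yes -auth admin:changeme
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -action send -target https://sh1.example.com:8089 --answer-yes -auth admin:changeme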
A customer has a search cluster (SHC) of six members split evenly between two data centers (DC). The customer is concerned with network connectivity between the two DCs due to frequent outages. Which of the following is true as it relates to SHC resiliency when a network outage occurs between the two DCs?
A. The SHC will function as expected as the SHC deployer will become the new captain until the network communication is restored.
B. The SHC will stop all scheduled search activity within the SHC.
C. The SHC will function as expected as the minimum required number of nodes for a SHC is 3.
D. The SHC will function as expected as the SHC captain will fall back to previous active captain in the remaining site.
Explanation: The SHC will stop all scheduled search activity within the SHC (option B). A search
head cluster uses a Raft-based election that requires a majority (more than 50 percent) of all
configured members in order to elect and maintain a captain. With six members split evenly across
two data centers, a network outage between the DCs leaves each site with only three members, and
three out of six is not a majority. Neither site can elect a captain, so captain-driven functions
such as scheduled searches are suspended until network communication is restored (or a static
captain is manually designated), although individual members can still serve ad hoc searches.
The other options are incorrect because they assume that a quorum survives the partition. Option A
is incorrect because the deployer is not a member of the SHC and never participates in captain
election. Option C is incorrect because the three-member minimum refers to the smallest supported
cluster size; quorum is always a majority of the total membership, which here is four of six, and
neither site retains that many members. Option D is incorrect because no captain, previous or
otherwise, can hold office without a majority of members able to communicate with it.
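As a hedged illustration (credentials are placeholders), the captain and member state can be checked from any cluster member with the following command; during a partition in which no site holds a majority, the output will not show a healthy elected captain:

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme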
A customer is using both internal Splunk authentication and LDAP for user management. If a username exists in both $SPLUNK_HOME/etc/passwd and LDAP, which of the following statements is accurate?
A. The internal Splunk authentication will take precedence.
B. Authentication will only succeed if the password is the same in both systems.
C. The LDAP user account will take precedence.
D. Splunk will error as it does not support overlapping usernames
Explanation: The internal Splunk authentication will take precedence (option A). Splunk software always attempts native (internal) authentication first and only falls through to the configured LDAP strategy when the username is not found locally. If the same username exists in both $SPLUNK_HOME/etc/passwd and LDAP, the user is authenticated against the local account and the LDAP entry is ignored for that user. The passwords do not need to match, overlapping usernames do not produce an error, and the LDAP account does not override the local one; to have such a user authenticate through LDAP, the local account must be removed.
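As a hedged sketch of authentication.conf with an LDAP strategy enabled alongside local accounts (the strategy name corp_ldap, the host, and the DNs are illustrative assumptions):

[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk_bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = uid
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

Even with LDAP configured this way, a username that also exists in $SPLUNK_HOME/etc/passwd is authenticated against the local account first.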
In a single indexer cluster, where should the Monitoring Console (MC) be installed?
A. Deployer sharing with master cluster.
B. License master that has 50 clients or more
C. Cluster master node
D. Production Search Head
Explanation: In a single indexer cluster, the best practice is to install the Monitoring
Console (MC) on the cluster master node. This is because the cluster master node has
access to all the information about the cluster state, such as the bucket status, the peer
status, the search head status, and the replication and search factors. The MC can use this
information to monitor the health and performance of the cluster and alert on any issues or
anomalies. The MC can also run distributed searches across all the peer nodes and collect
metrics and logs from them.
The other options are incorrect because they are not recommended locations for installing
the MC in a single indexer cluster. Option A is incorrect because the deployer should not be
colocated with the cluster master, as combining these roles can cause conflicts and errors in applying
configuration bundles to the cluster. Option B is incorrect because the license master is not
a good candidate for hosting the MC, as it does not have direct access to the cluster
information and it might have a high load from managing license usage for many clients.
Option D is incorrect because the production search head is not a good candidate for
hosting the MC, as it might have a high load from serving user searches and dashboards,
and it might not be able to run distributed searches across all the peer nodes if it is not part
of the cluster.
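As a hedged illustration of the cluster-level visibility available on the cluster master (credentials are placeholders), the following command, run on the master node, reports peer status and whether the replication and search factors are met, which is the same information an MC hosted there builds on:

$SPLUNK_HOME/bin/splunk show cluster-status --verbose -auth admin:changeme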