Several small doc updates for the 4.4 release.
Fixes #5929
Paul Echeverri committed Jan 17, 2016
1 parent 0cc7326 commit f0f639e
Showing 3 changed files with 38 additions and 20 deletions.
18 changes: 13 additions & 5 deletions docs/getting-started.asciidoc
@@ -195,7 +195,8 @@ yellow open logstash-2015.05.20 5 1 4750 0 16.4mb
[[tutorial-define-index]]
=== Defining Your Index Patterns

-Each set of data loaded to Elasticsearch has an <<settings-create-pattern,index pattern>>. In the previous section, the Shakespeare data set has an index named `shakespeare`, and the accounts
+Each set of data loaded to Elasticsearch has an <<settings-create-pattern,index pattern>>. In the previous section, the
+Shakespeare data set has an index named `shakespeare`, and the accounts
data set has an index named `bank`. An _index pattern_ is a string with optional wildcards that can match multiple
indices. For example, in the common logging use case, a typical index name contains the date in YYYY.MM.DD
format, and an index pattern for May would look something like `logstash-2015.05*`.
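
As a rough illustration of how such a wildcard resolves, the sketch below lists the concrete indices behind a pattern
using the elasticsearch-py client; the host and index names are assumptions based on the tutorial data.

[source,python]
----
# Illustrative sketch only: list the concrete indices a wildcard pattern matches.
# Assumes elasticsearch-py is installed and Elasticsearch runs on localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

# Every daily Logstash index for May 2015 that exists in the cluster.
for name in sorted(es.indices.get("logstash-2015.05*")):
    print(name)  # e.g. logstash-2015.05.20
----
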
@@ -211,6 +212,9 @@ The Logstash data set does contain time-series data, so after clicking *Add New*
set, make sure the *Index contains time-based events* box is checked and select the `@timestamp` field from the
*Time-field name* drop-down.

+NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
+contain data.
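
The requirement in this note can also be verified outside the UI; a minimal check, assuming the same local instance and
pattern as above:

[source,python]
----
# Minimal sketch: confirm that indices matching a pattern exist and hold documents.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])
pattern = "logstash-2015.05*"  # assumed pattern from the tutorial

if not es.indices.exists(pattern):
    print("No indices match %s yet" % pattern)
else:
    print("%s covers %d documents" % (pattern, es.count(index=pattern)["count"]))
----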

[float]
[[tutorial-discovering]]
=== Discovering Your Data
@@ -288,8 +292,10 @@ This shows you what proportion of the 1000 accounts fall in these balance ranges
we're going to add another bucket aggregation. We can break down each of the balance ranges further by the account
holder's age.

-Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field from the drop-downs.
-Click the green *Apply changes* button image:images/apply-changes-button.png[] to add an external ring with the new results.
+Click *Add sub-buckets* at the bottom, then select *Split Slices*. Choose the *Terms* aggregation and the *age* field from
+the drop-downs.
+Click the green *Apply changes* button image:images/apply-changes-button.png[] to add an external ring with the new
+results.

image::images/tutorial-visualize-pie-3.png[]
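
For readers who want to see the request behind this, the nested bucketing corresponds roughly to a `range` aggregation
with a `terms` sub-aggregation on `age`; the sketch below abbreviates the balance ranges and assumes a local cluster.

[source,python]
----
# Sketch of the aggregation behind the ring chart: balance ranges, each split
# by a terms sub-aggregation on the age field. Ranges abbreviated for brevity.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

resp = es.search(index="bank", body={
    "size": 0,  # buckets only, no individual hits
    "aggs": {
        "balance_ranges": {
            "range": {
                "field": "balance",
                "ranges": [{"to": 1000}, {"from": 1000, "to": 3000}, {"from": 3000}],
            },
            "aggs": {"ages": {"terms": {"field": "age"}}},
        }
    },
})

for bucket in resp["aggregations"]["balance_ranges"]["buckets"]:
    ages = [(a["key"], a["doc_count"]) for a in bucket["ages"]["buckets"]]
    print(bucket["key"], bucket["doc_count"], ages)
----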

@@ -321,7 +327,8 @@ as well as change many other options for your visualizations, by clicking the *O
Now that you have a list of the smallest casts for Shakespeare plays, you might also be curious to see which of these
plays makes the greatest demands on an individual actor by showing the maximum number of speeches for a given part. Add
a Y-axis aggregation with the *Add metrics* button, then choose the *Max* aggregation for the *speech_number* field. In
-the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the green *Apply changes* button image:images/apply-changes-button.png[]. Your
+the *Options* tab, change the *Bar Mode* drop-down to *grouped*, then click the green *Apply changes* button
+image:images/apply-changes-button.png[]. Your
chart should now look like this:

image::images/tutorial-visualize-bar-3.png[]
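
The same figures can be pulled with a `terms` aggregation carrying a `cardinality` metric (cast size) and a `max` metric
on `speech_number`; the field names come from the tutorial's Shakespeare mapping, and the chart's ordering is omitted here.

[source,python]
----
# Sketch of the grouped bar chart's data: per play, the number of distinct
# speakers and the largest speech_number. Ordering by cast size is omitted.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

resp = es.search(index="shakespeare", body={
    "size": 0,
    "aggs": {
        "plays": {
            "terms": {"field": "play_name", "size": 5},
            "aggs": {
                "cast_size": {"cardinality": {"field": "speaker"}},
                "max_speeches": {"max": {"field": "speech_number"}},
            },
        }
    },
})

for play in resp["aggregations"]["plays"]["buckets"]:
    print(play["key"], play["cast_size"]["value"], play["max_speeches"]["value"])
----
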
@@ -371,7 +378,8 @@ Write the following text in the field:
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

-Click the green *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the preview pane:
+Click the green *Apply changes* button image:images/apply-changes-button.png[] to display the rendered Markdown in the
+preview pane:

image::images/tutorial-visualize-md-2.png[]

28 changes: 14 additions & 14 deletions docs/releasenotes.asciidoc
@@ -1,27 +1,27 @@
[[releasenotes]]
-== Kibana 4.3 Release Notes
+== Kibana 4.4 Release Notes

-The 4.3 release of Kibana requires Elasticsearch 2.1 or later.
+The 4.4 release of Kibana requires Elasticsearch 2.2 or later.

-Using event times to create index names is *deprecated* in this release of Kibana. Support for this functionality will be
-removed entirely in the next major Kibana release. Elasticsearch 2.1 includes sophisticated date parsing APIs that Kibana
-uses to determine date information, removing the need to specify dates in the index pattern name.
+Using event times to create index names is no longer supported as of this release. Current versions of Elasticsearch
+include sophisticated date parsing APIs that Kibana uses to determine date information, removing the need to specify dates
+in the index pattern name.

[float]
[[enhancements]]
== Enhancements

-* {k4issue}5109[Issue 5109]: Adds custom JSON and filter alias naming for filters.
-* {k4issue}1726[Issue 1726]: Adds a color field formatter for value ranges in numeric fields.
-* {k4issue}4342[Issue 4342]: Increased performance for wildcard indices.
-* {k4issue}1600[Issue 1600]: Support for global time zones.
-* {k4pull}5275[Pull Request 5275]: Highlighting values in Discover can now be disabled.
-* {k4issue}5212[Issue 5212]: Adds support for multiple certificate authorities.
-* {k4issue}2716[Issue 2716]: The open/closed position of the spy panel now persists across UI state changes.
+// * {k4issue}5109[Issue 5109]: Adds custom JSON and filter alias naming for filters.
+// * {k4issue}1726[Issue 1726]: Adds a color field formatter for value ranges in numeric fields.
+// * {k4issue}4342[Issue 4342]: Increased performance for wildcard indices.
+// * {k4issue}1600[Issue 1600]: Support for global time zones.
+// * {k4pull}5275[Pull Request 5275]: Highlighting values in Discover can now be disabled.
+// * {k4issue}5212[Issue 5212]: Adds support for multiple certificate authorities.
+// * {k4issue}2716[Issue 2716]: The open/closed position of the spy panel now persists across UI state changes.

[float]
[[bugfixes]]
== Bug Fixes

-* {k4issue}5165[Issue 5165]: Resolves a display error in embedded views.
-* {k4issue}5021[Issue 5021]: Improves visualization dimming for dashboards with auto-refresh.
+// * {k4issue}5165[Issue 5165]: Resolves a display error in embedded views.
+// * {k4issue}5021[Issue 5021]: Improves visualization dimming for dashboards with auto-refresh.
12 changes: 11 additions & 1 deletion docs/settings.asciidoc
@@ -35,11 +35,17 @@ list.
contains time-based events* option and select the index field that contains the timestamp. Kibana reads the index
mapping to list all of the fields that contain a timestamp.

+. By default, Kibana restricts wildcard expansion of time-based index patterns to indices with data within the currently
+selected time range; a sketch of this idea follows these steps. Click *Do not expand index pattern when searching* to
+disable this behavior.

. Click *Create* to add the index pattern.

. To designate the new pattern as the default pattern to load when you view the Discover tab, click the *favorite*
button.

+NOTE: When you define an index pattern, indices that match that pattern must exist in Elasticsearch. Those indices must
+contain data.

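The restricted expansion described in the steps above boils down to querying only the indices that actually hold
documents in the selected time window. A conceptual sketch follows (not Kibana's exact mechanism; the field name, bounds,
and pattern are assumptions):

[source,python]
----
# Conceptual sketch of restricted wildcard expansion: keep only the indices that
# contain documents inside the selected time window. Not Kibana's exact code path.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

window = {"range": {"@timestamp": {"gte": "2015-05-19", "lt": "2015-05-21"}}}
for index in sorted(es.indices.get("logstash-2015.05*")):
    in_range = es.count(index=index, body={"query": window})["count"]
    if in_range:
        print("searchable:", index, "(%d docs in range)" % in_range)
----
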
To use an event time in an index name, enclose the static text in the pattern and specify the date format using the
tokens described in the following table.

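Daily indices of this kind are normally created by the shipper rather than by Kibana; a minimal sketch of deriving such
a name from an event timestamp (the `logstash-` prefix is just the conventional example):

[source,python]
----
# Sketch: compute the daily index an event belongs to, the convention that
# event-time index patterns rely on.
from datetime import datetime

def daily_index(event_time, prefix="logstash-"):
    """Return an index name such as logstash-2015.05.20."""
    return prefix + event_time.strftime("%Y.%m.%d")

print(daily_index(datetime(2015, 5, 20, 14, 30)))  # -> logstash-2015.05.20
----
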
@@ -195,6 +201,8 @@ Scripted fields compute data on the fly from the data in your Elasticsearch indi
the Discover tab as part of the document data, and you can use scripted fields in your visualizations.
Scripted field values are computed at query time so they aren't indexed and cannot be searched.

+NOTE: Kibana cannot query scripted fields.

WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on
Kibana's performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated data.
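
As a concrete illustration of query-time computation, the `script_fields` section of a search request does essentially
the same thing; the field, script, and scripting language below are assumptions, and whether inline scripts run at all
depends on the cluster's scripting settings.

[source,python]
----
# Sketch: a value computed at query time via script_fields, analogous to a
# scripted field. Script syntax and availability depend on the cluster's settings.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

resp = es.search(index="bank", body={
    "size": 3,
    "query": {"match_all": {}},
    "script_fields": {
        "balance_k": {  # hypothetical derived field: balance in thousands
            "script": {"inline": "doc['balance'].value / 1000", "lang": "expression"}
        }
    },
})

for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["fields"]["balance_k"])
----
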
@@ -449,10 +457,12 @@ To export a set of objects:
. Click the selection box for the objects you want to export, or click the *Select All* box.
. Click *Export* to select a location to write the exported JSON.

+WARNING: Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before
+importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.

To import a set of objects:

. Go to *Settings > Objects*.
. Click *Import* to navigate to the JSON file representing the set of objects to import.
. Click *Open* after selecting the JSON file.
. If any objects in the set would overwrite objects already present in Kibana, confirm the overwrite.
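
Both lists above operate on Kibana's saved objects, which Kibana 4.x keeps as documents in its own Elasticsearch index
(`.kibana` by default, configurable via `kibana.index`); a rough sketch of listing saved dashboards directly, assuming
the default index name and an older client that still accepts `doc_type`:

[source,python]
----
# Rough sketch: list saved dashboards straight from Kibana's own index.
from elasticsearch import Elasticsearch

es = Elasticsearch(["localhost:9200"])

resp = es.search(index=".kibana", doc_type="dashboard",
                 body={"query": {"match_all": {}}, "size": 50})

for hit in resp["hits"]["hits"]:
    print(hit["_id"], "-", hit["_source"].get("title"))
----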
