updated and repatched search_api_solr module

no need for patch anymore
see https://www.drupal.org/node/1846860#comment-8137979
Bachir Soussi Chiadmi 2015-04-20 19:22:33 +02:00
parent ca9413af4d
commit c5028bffd3
22 changed files with 444 additions and 2460 deletions

View File

@ -1,3 +1,35 @@
Search API Solr search 1.6 (09/08/2014):
----------------------------------------
- #2050961 by das-peter, drunken monkey: Added proximity/distance information
to search results.
- #2242073 by RaF: Fixed handling of custom negative filters in filter-only
searches.
- #2290601 by drunken monkey: Fixed handling of complex keywords and OR facets.
- #2083357 by drunken monkey: Added note that Solr 4.x server paths should be
specified with core.
- #2270767 by RaF: Fixed search_api_solr_views_data_alter() not always
returning all virtual fields.
Search API Solr search 1.5 (05/23/2014):
----------------------------------------
- #2216895 by das-peter: Added support for empty/non-empty conditions on
location field types.
- #2162627 by drunken monkey: Removed Solr 1.4 support.
- #2175829 by danquah, drunken monkey: Fixed error when admin interface is not
accessible.
- #2222037 by drunken monkey: Fixed "Files" tab in Solr 4.7.
- #2151719 by Derimagia, drunken monkey: Added an alter hook for multi-index
search results.
- #1776534 by drunken monkey, e2thex: Added support for using a Solr server
with multiple sites.
- #2152337 by drunken monkey: Removed confusing "multiple text fields" section
from README.txt.
- #2099559 by drunken monkey: Made optimizing the Solr server optional.
- #2146749 by drunken monkey: Added soft commits as the default for Solr 4.
- #1773440 by drunken monkey: Added performance improvement for “filter
only” queries.
- #2147573 by drunken monkey: Improved error handling.
Search API Solr search 1.4 (12/25/2013):
----------------------------------------
- #2157839 by drunken monkey, Nick_vh: Updated config files to the newest

View File

@ -19,25 +19,14 @@ somewhere outside of your web server's document tree.
[3] http://www.apache.org/dyn/closer.cgi/lucene/solr/
This module also supports Solr 1.4 and 3.x. For better performance and more
features, 4.x should be used, though. 1.4 is discouraged altogether, as several
features of the module don't work at all in 1.4.
This module also supports Solr 3.x. For better performance and more features,
4.x should be used, though.
For small websites, using the example application, located in $SOLR/example/,
usually suffices. In any case, you can use it for developing andd testing. The
usually suffices. In any case, you can use it for developing and testing. The
following instructions will assume you are using the example application,
otherwise you should be able to substitute the corresponding paths.
NOTE: The Solr 4.3+ example application is currently not completely supported
with the configuration files included in this module, due to a slight change in
directory structure. To fix this, simply copy, move or symlink the contrib/
directory from the top level of the extracted Solr package one level down to
example/.
(For other directory structures: the contrib/ directory has to be in the
directory two levels up from the one which includes the conf/ directory. For
help, just start the Solr server and check the log files for WARN messages;
they should state in which place Solr expects the directory to be.)
CAUTION! For production sites, it is vital that you somehow prevent outside
access to the Solr server. Otherwise, attackers could read, corrupt or delete
all your indexed data. Using the example server WON'T prevent this by default.
@ -69,7 +58,7 @@ java -jar start.jar &
Afterwards, go to [4] in your web browser to ensure Solr is running correctly.
[4] http://localhost:8983/solr/#/
[4] http://localhost:8983/solr/
You can then enable this module and create a new server, using the "Solr search"
service class. Enter the hostname, port and path corresponding to your Solr
@ -77,3 +66,7 @@ server in the appropriate fields. The default values already correspond to the
example application, so you won't have to change the values if you use that.
If you are using HTTP Authentication to protect your Solr server you also have
to provide the appropriate user and password here.
NOTE: For Solr 4.x, the server's path should also contain the Solr core name.
E.g., when using the example application unchanged, set the path to
"/solr/collection1" instead of "/solr".
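As an illustration only (the option keys mirror the connection settings form, and the values are the example application's defaults — adjust host, port and core name to your setup), a Solr 4.x server definition carries the core name in its path:

```php
<?php

// Hypothetical values matching the unchanged example application.
$server_options = array(
  'scheme' => 'http',
  'host' => 'localhost',
  'port' => 8983,
  // Solr 4.x: the path must include the core name.
  'path' => '/solr/collection1',
);
```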

View File

@ -59,8 +59,7 @@ Regarding third-party features, the following are supported:
- search_api_data_type_location
Introduced by module: search_api_location
Lets you index, filter and sort on location fields. Note, however, that only
single-valued fields are currently supported for Solr 3.x, and that the option
isn't supported at all in Solr 1.4.
single-valued fields are currently supported for Solr 3.x.
- search_api_grouping
Introduced by module: search_api_grouping [5]
Lets you group search results based on indexed fields. For further information
@ -86,13 +85,6 @@ classic preprocessing tasks. Enabling the HTML filter can be useful, though, as
the default config files included in this module don't handle stripping out HTML
tags.
Also, due to the way Solr works, using a single field for fulltext searching
will result in the smallest index size and best search performance, as well as
possibly having other advantages, too. Therefore, if you don't need to search
different sets of fields in different searches on an index, it is advised that
you collect all fields that should be searchable into a single field using the
“Aggregated fields” data alteration.
Clean field identifiers:
If your Solr server was created in a module version prior to 1.2, you will get
the option to switch the server to "Clean field identifiers" (which is default
@ -130,6 +122,25 @@ Hidden variables
The maximum number of bytes that can be handled as an HTTP GET query when
HTTP method is AUTO. Typically Solr can handle up to 65355 bytes, but Tomcat
and Jetty will error at slightly less than 4096 bytes.
- search_api_solr_cron_action (default: "spellcheck")
The Search API Solr Search module can automatically execute some upkeep
operations daily during cron runs. This variable determines what particular
operation is carried out.
- spellcheck: The "default" spellcheck dictionary used by Solr will be rebuilt
so that spellchecking reflects the latest index state.
- optimize: An "optimize" operation [8] is executed on the Solr server. As a
result of this, all spellcheck dictionaries (that have "buildOnOptimize" set
to "true") will be rebuilt, too.
- none: No action is executed.
If an unknown setting is encountered, it is interpreted as "none".
- search_api_solr_site_hash (default: random)
A unique hash specific to the local site, created the first time it is needed.
Only change this if you want to display another server's results and you know
what you are doing. Old indexed items will be lost when the hash is changed
and all items will have to be reindexed. Can only contain alphanumeric
characters.
[8] http://wiki.apache.org/solr/UpdateXmlMessages#A.22commit.22_and_.22optimize.22
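The fall-back behaviour for unknown search_api_solr_cron_action values can be sketched as a small helper (hypothetical name — the module itself does this check inline in search_api_solr_cron(), where it also tolerates the British spelling "optimise"):

```php
<?php

// Resolve a search_api_solr_cron_action setting; unknown values mean "none".
function resolve_cron_action($value) {
  $known = array('spellcheck', 'optimize', 'none');
  return in_array($value, $known, TRUE) ? $value : 'none';
}
```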
Customizing your Solr server
----------------------------
@ -138,11 +149,11 @@ The schema.xml and solrconfig.xml files contain extensive comments on how to
add additional features or modify behaviour, e.g., for adding a language-
specific stemmer or a stopword list.
If you are interested in further customizing your Solr server to your needs,
see the Solr wiki at [8] for documentation. When editing the schema.xml and
see the Solr wiki at [9] for documentation. When editing the schema.xml and
solrconfig.xml files, please only edit the copies in the Solr configuration
directory, not directly the ones provided with this module.
[8] http://wiki.apache.org/solr/
[9] http://wiki.apache.org/solr/
You'll have to restart your Solr server after making such changes, for them to
take effect.

View File

@ -79,19 +79,22 @@ class SearchApiSolrService extends SearchApiAbstractService {
'excerpt' => FALSE,
'retrieve_data' => FALSE,
'highlight_data' => FALSE,
'skip_schema_check' => FALSE,
'solr_version' => '',
'http_method' => 'AUTO',
// Default to TRUE for new servers, but to FALSE for existing ones.
'clean_ids' => $this->options ? FALSE : TRUE,
'site_hash' => $this->options ? FALSE : TRUE,
'autocorrect_spell' => TRUE,
'autocorrect_suggest_words' => TRUE,
);
if (!$options['clean_ids']) {
if (module_exists('advanced_help')) {
$variables['@url']= url('help/search_api_solr/README.txt');
$variables['@url'] = url('help/search_api_solr/README.txt');
}
else {
$variables['@url']= url(drupal_get_path('module', 'search_api_solr') . '/README.txt');
$variables['@url'] = url(drupal_get_path('module', 'search_api_solr') . '/README.txt');
}
$description = t('Change Solr field names to be more compatible with advanced features. Doing this leads to re-indexing of all indexes on this server. See <a href="@url">README.txt</a> for details.', $variables);
$form['clean_ids_form'] = array(
@ -111,6 +114,25 @@ class SearchApiSolrService extends SearchApiAbstractService {
'#value' => $options['clean_ids'],
);
if (!$options['site_hash']) {
$description = t('If you want to index content from multiple sites on a single Solr server, you should enable the multi-site compatibility here. Note, however, that this will completely clear all search indexes (from this site) lying on this server. All content will have to be re-indexed.');
$form['site_hash_form'] = array(
'#type' => 'fieldset',
'#title' => t('Multi-site compatibility'),
'#description' => $description,
'#collapsible' => TRUE,
);
$form['site_hash_form']['submit'] = array(
'#type' => 'submit',
'#value' => t('Turn on multi-site compatibility and clear all indexes'),
'#submit' => array('_search_api_solr_switch_to_site_hash'),
);
}
$form['site_hash'] = array(
'#type' => 'value',
'#value' => $options['site_hash'],
);
$form['scheme'] = array(
'#type' => 'select',
'#title' => t('HTTP protocol'),
@ -139,7 +161,7 @@ class SearchApiSolrService extends SearchApiAbstractService {
$form['path'] = array(
'#type' => 'textfield',
'#title' => t('Solr path'),
'#description' => t('The path that identifies the Solr instance to use on the server.'),
'#description' => t('The path that identifies the Solr instance to use on the server. (For Solr 4.x servers, this should include the name of the core to use.)'),
'#default_value' => $options['path'],
);
@ -188,6 +210,24 @@ class SearchApiSolrService extends SearchApiAbstractService {
'#description' => t('When retrieving result data from the Solr server, try to highlight the search terms in the returned fulltext fields.'),
'#default_value' => $options['highlight_data'],
);
$form['advanced']['skip_schema_check'] = array(
'#type' => 'checkbox',
'#title' => t('Skip schema verification'),
'#description' => t('Skip the automatic check for schema compatibility. Use this override if you are seeing an error message about an incompatible schema.xml configuration file, and you are sure the configuration is compatible.'),
'#default_value' => $options['skip_schema_check'],
);
$form['advanced']['solr_version'] = array(
'#type' => 'select',
'#title' => t('Solr version override'),
'#description' => t('Specify the Solr version manually in case it cannot be retrieved automatically. The version can be found in the Solr admin interface under "Solr Specification Version" or "solr-spec".'),
'#options' => array(
'' => t('Determine automatically'),
'1' => '1.4',
'3' => '3.x',
'4' => '4.x',
),
'#default_value' => $options['solr_version'],
);
// Highlighting retrieved data only makes sense when we retrieve data.
// (Actually, internally it doesn't really matter. However, from a user's
// perspective, having to check both probably makes sense.)
@ -382,15 +422,17 @@ class SearchApiSolrService extends SearchApiAbstractService {
);
$status = 'ok';
if (substr($stats_summary['@schema_version'], 0, 10) == 'search-api') {
drupal_set_message(t('Your schema.xml version is too old. Please replace all configuration files with the ones packaged with this module and re-index your data.'), 'error');
$status = 'error';
}
elseif (substr($stats_summary['@schema_version'], 0, 9) != 'drupal-4.') {
$variables['@url'] = url(drupal_get_path('module', 'search_api_solr') . '/INSTALL.txt');
$message = t('You are using an incompatible schema.xml configuration file. Please follow the instructions in the <a href="@url">INSTALL.txt</a> file for setting up Solr.', $variables);
drupal_set_message($message, 'error');
$status = 'error';
if (empty($this->options['skip_schema_check'])) {
if (substr($stats_summary['@schema_version'], 0, 10) == 'search-api') {
drupal_set_message(t('Your schema.xml version is too old. Please replace all configuration files with the ones packaged with this module and re-index your data.'), 'error');
$status = 'error';
}
elseif (substr($stats_summary['@schema_version'], 0, 9) != 'drupal-4.') {
$variables['@url'] = url(drupal_get_path('module', 'search_api_solr') . '/INSTALL.txt');
$message = t('You are using an incompatible schema.xml configuration file. Please follow the instructions in the <a href="@url">INSTALL.txt</a> file for setting up Solr.', $variables);
drupal_set_message($message, 'error');
$status = 'error';
}
}
$info[] = array(
'label' => t('Schema'),
@ -477,16 +519,21 @@ class SearchApiSolrService extends SearchApiAbstractService {
if (module_exists('search_api_multi') && module_exists('search_api_views')) {
views_invalidate_cache();
}
$id = is_object($index) ? $index->machine_name : $index;
$index_id = is_object($index) ? $index->machine_name : $index;
// Only delete the index's data if the index isn't read-only.
if (!is_object($index) || empty($index->read_only)) {
$this->connect();
try {
$this->solr->deleteByQuery("index_id:" . $this->getIndexId($id));
}
catch (Exception $e) {
throw new SearchApiException($e->getMessage());
$index_id = $this->getIndexId($index_id);
// Since the index ID we use for indexing can contain arbitrary
// prefixes, we have to escape it for use in the query.
$index_id = call_user_func(array($this->connection_class, 'phrase'), $index_id);
$query = "index_id:$index_id";
if (!empty($this->options['site_hash'])) {
// We don't need to escape the site hash, as that consists only of
// alphanumeric characters.
$query .= ' hash:' . search_api_solr_site_hash();
}
$this->solr->deleteByQuery($query);
}
}
@ -498,27 +545,53 @@ class SearchApiSolrService extends SearchApiAbstractService {
$ret = array();
$index_id = $this->getIndexId($index->machine_name);
$fields = $this->getFieldNames($index);
$languages = language_list();
$base_urls = array();
foreach ($items as $id => $item) {
try {
$doc = new SearchApiSolrDocument();
$doc->setField('id', $this->createId($index_id, $id));
$doc->setField('index_id', $index_id);
$doc->setField('item_id', $id);
$doc = new SearchApiSolrDocument();
$doc->setField('id', $this->createId($index_id, $id));
$doc->setField('index_id', $index_id);
$doc->setField('item_id', $id);
foreach ($item as $key => $field) {
if (!isset($fields[$key])) {
throw new SearchApiException(t('Unknown field @field.', array('@field' => $key)));
// If multi-site compatibility is enabled, add the site hash and
// language-specific base URL.
if (!empty($this->options['site_hash'])) {
$doc->setField('hash', search_api_solr_site_hash());
$lang = $item['search_api_language']['value'];
if (empty($base_urls[$lang])) {
$url_options = array('absolute' => TRUE);
if (isset($languages[$lang])) {
$url_options['language'] = $languages[$lang];
}
$this->addIndexField($doc, $fields[$key], $field['value'], $field['type']);
$base_urls[$lang] = url(NULL, $url_options);
}
$doc->setField('site', $base_urls[$lang]);
}
// Now add all fields contained in the item, with dynamic fields.
foreach ($item as $key => $field) {
// If the field is not known for the index, something weird has
// happened. We refuse to index the items and hope that the others are
// OK.
if (!isset($fields[$key])) {
$type = search_api_get_item_type_info($index->item_type);
$vars = array(
'@field' => $key,
'@type' => $type ? $type['name'] : $index->item_type,
'@id' => $id,
);
watchdog('search_api_solr', 'Error while indexing: Unknown field @field set for @type with ID @id.', $vars, WATCHDOG_WARNING);
$doc = NULL;
break;
}
$this->addIndexField($doc, $fields[$key], $field['value'], $field['type']);
}
if ($doc) {
$documents[] = $doc;
$ret[] = $id;
}
catch (Exception $e) {
watchdog_exception('search_api_solr', $e, "%type while indexing @type with ID @id: !message in %function (line %line of %file).", array('@type' => $index->item_type, '@id' => $id), WATCHDOG_WARNING);
}
}
// Let other modules alter documents before sending them to solr.
@ -545,10 +618,14 @@ class SearchApiSolrService extends SearchApiAbstractService {
/**
* Creates an ID used as the unique identifier at the Solr server.
*
* This has to consist of both index and item ID.
* This has to consist of both index and item ID. Optionally, the site hash is
* also included.
*
* @see search_api_solr_site_hash()
*/
protected function createId($index_id, $item_id) {
return "$index_id-$item_id";
$site_hash = !empty($this->options['site_hash']) ? search_api_solr_site_hash() . '-' : '';
return "$site_hash$index_id-$item_id";
}
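Stand-alone, the ID scheme used by createId() above amounts to (hypothetical function name for illustration):

```php
<?php

// Build the Solr document ID from index ID, item ID and an optional site
// hash, so documents from multiple sites on one server cannot collide.
function solr_document_id($index_id, $item_id, $site_hash = '') {
  $prefix = $site_hash !== '' ? $site_hash . '-' : '';
  return "$prefix$index_id-$item_id";
}
```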
/**
@ -690,25 +767,30 @@ class SearchApiSolrService extends SearchApiAbstractService {
*/
public function deleteItems($ids = 'all', SearchApiIndex $index = NULL) {
$this->connect();
if ($index) {
if (is_array($ids)) {
$index_id = $this->getIndexId($index->machine_name);
if (is_array($ids)) {
$solr_ids = array();
foreach ($ids as $id) {
$solr_ids[] = $this->createId($index_id, $id);
}
$this->solr->deleteByMultipleIds($solr_ids);
}
elseif ($ids == 'all') {
$this->solr->deleteByQuery("index_id:" . $index_id);
}
else {
$this->solr->deleteByQuery("index_id:" . $index_id . ' (' . $ids . ')');
$solr_ids = array();
foreach ($ids as $id) {
$solr_ids[] = $this->createId($index_id, $id);
}
$this->solr->deleteByMultipleIds($solr_ids);
}
else {
$q = $ids == 'all' ? '*:*' : $ids;
$this->solr->deleteByQuery($q);
$query = array();
if ($index) {
$index_id = $this->getIndexId($index->machine_name);
$index_id = call_user_func(array($this->connection_class, 'phrase'), $index_id);
$query[] = "index_id:$index_id";
}
if (!empty($this->options['site_hash'])) {
// We don't need to escape the site hash, as that consists only of
// alphanumeric characters.
$query[] = 'hash:' . search_api_solr_site_hash();
}
if ($ids != 'all') {
$query[] = $query ? "($ids)" : $ids;
}
$this->solr->deleteByQuery($query ? implode(' ', $query) : '*:*');
}
$this->scheduleCommit();
}
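The delete query the rewritten deleteItems() sends can be sketched stand-alone (hypothetical helper; simple double quoting stands in here for the connection class's phrase() escaping):

```php
<?php

// Build a Solr delete query from an optional index ID, site hash and ID
// filter; with no constraints at all, everything is deleted.
function build_delete_query($index_id = NULL, $site_hash = NULL, $ids = 'all') {
  $query = array();
  if ($index_id !== NULL) {
    // Approximates phrase() escaping of the (possibly prefixed) index ID.
    $query[] = 'index_id:"' . $index_id . '"';
  }
  if ($site_hash !== NULL) {
    // The site hash is alphanumeric only, so no escaping is needed.
    $query[] = 'hash:' . $site_hash;
  }
  if ($ids !== 'all') {
    $query[] = $query ? "($ids)" : $ids;
  }
  return $query ? implode(' ', $query) : '*:*';
}
```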
@ -749,7 +831,12 @@ class SearchApiSolrService extends SearchApiAbstractService {
// Extract filters.
$filter = $query->getFilter();
$fq = $this->createFilterQueries($filter, $fields, $index->options['fields']);
$fq[] = 'index_id:' . $index_id;
$fq[] = 'index_id:' . call_user_func(array($this->connection_class, 'phrase'), $index_id);
if (!empty($this->options['site_hash'])) {
// We don't need to escape the site hash, as that consists only of
// alphanumeric characters.
$fq[] = 'hash:' . search_api_solr_site_hash();
}
// Extract sort.
$sort = array();
@ -855,6 +942,19 @@ class SearchApiSolrService extends SearchApiAbstractService {
}
}
// Add parameters to fetch distance, if requested.
if (!empty($spatial['distance']) && $version >= 4) {
if (strpos($field, ':') === FALSE) {
// Add pseudofield with the distance to the result items.
$location_fields[] = '_' . $field . '_distance_:geodist(' . $field . ',' . $point . ')';
}
else {
$link = l(t('edit server'), 'admin/config/search/search_api/server/' . $this->server->machine_name . '/edit');
watchdog('search_api_solr', "Location distance information can't be added because unclean field identifiers are used.", array(), WATCHDOG_WARNING, $link);
}
}
// Change the facet parameters for spatial fields to return distance
// facets.
if (!empty($facets)) {
@ -982,6 +1082,10 @@ class SearchApiSolrService extends SearchApiAbstractService {
if (!empty($this->options['retrieve_data'])) {
$params['fl'] = '*,score';
}
if (!empty($location_fields)) {
$params['fl'] .= ',' . implode(',', $location_fields);
}
// Retrieve http method from server options.
$http_method = !empty($this->options['http_method']) ? $this->options['http_method'] : 'AUTO';
@ -1051,6 +1155,7 @@ class SearchApiSolrService extends SearchApiAbstractService {
$index = $query->getIndex();
$fields = $this->getFieldNames($index);
$field_options = $index->options['fields'];
$version = $this->solr->getSolrVersion();
// Set up the results array.
$results = array();
@ -1087,6 +1192,7 @@ class SearchApiSolrService extends SearchApiAbstractService {
$results['result count'] = $response->response->numFound;
$docs = $response->response->docs;
}
$spatials = $query->getOption('search_api_location');
// Add each search result to the results array.
foreach ($docs as $doc) {
@ -1119,6 +1225,22 @@ class SearchApiSolrService extends SearchApiAbstractService {
$result['id'] = $result['fields']['search_api_id'];
$result['score'] = $result['fields']['search_api_relevance'];
// If location based search is enabled ensure the calculated distance is
// set to the appropriate field. If the calculation wasn't possible add
// the coordinates to allow calculation.
if ($spatials) {
foreach ($spatials as $spatial) {
if (isset($spatial['field']) && !empty($spatial['distance'])) {
if ($version >= 4) {
$doc_field = '_' . $fields[$spatial['field']] . '_distance_';
if (!empty($doc->{$doc_field})) {
$results['search_api_location'][$spatial['field']][$result['id']]['distance'] = $doc->{$doc_field};
}
}
}
}
}
$index_id = $this->getIndexId($index->machine_name);
$solr_id = $this->createId($index_id, $result['id']);
$excerpt = $this->getExcerpt($response, $solr_id, $result['fields'], $fields);
@ -1364,7 +1486,7 @@ class SearchApiSolrService extends SearchApiAbstractService {
}
return '((' . implode(') OR (', $k) . '))';
}
$k = implode($neg ? ' AND ' : ' ', $k);
$k = implode(' AND ', $k);
return $neg ? "*:* AND -($k)" : $k;
}
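The changed line always joins the parsed keys with " AND " and only then applies the negation wrapper, which can be sketched as:

```php
<?php

// Combine parsed search keys; a negated group becomes "*:* AND -(...)" so
// Solr can handle the purely negative query.
function flatten_negatable_keys(array $k, $neg) {
  $k = implode(' AND ', $k);
  return $neg ? "*:* AND -($k)" : $k;
}
```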
@ -1384,7 +1506,7 @@ class SearchApiSolrService extends SearchApiAbstractService {
$fq[] = $this->createFilterQuery($solr_fields[$f[0]], $f[1], $f[2], $fields[$f[0]]);
}
}
else {
elseif ($f instanceof SearchApiQueryFilterInterface) {
$q = $this->createFilterQueries($f, $solr_fields, $fields);
if ($filter->getConjunction() != $f->getConjunction()) {
// $or == TRUE means the nested filter has conjunction AND, and vice versa
@ -1396,6 +1518,16 @@ class SearchApiSolrService extends SearchApiAbstractService {
}
}
}
if (method_exists($filter, 'getTags')) {
foreach ($filter->getTags() as $tag) {
$tag = "{!tag=$tag}";
foreach ($fq as $i => $filter) {
$fq[$i] = $tag . $filter;
}
// We can only apply one tag per filter.
break;
}
}
return ($or && count($fq) > 1) ? array('((' . implode(') OR (', $fq) . '))') : $fq;
}
@ -1405,6 +1537,16 @@ class SearchApiSolrService extends SearchApiAbstractService {
*/
protected function createFilterQuery($field, $value, $operator, $field_info) {
$field = call_user_func(array($this->connection_class, 'escapeFieldName'), $field);
// Special handling for location fields.
if (isset($field_info['real_type']) && $field_info['real_type'] == 'location') {
// Empty / non-empty comparison has to take place in one of the subfields
// of the location field type. These subfields are usually generated with
// the index and the field type as name suffix.
// @TODO Do we need to handle other operators / values too?
if ($value === NULL) {
$field .= '_0___tdouble';
}
}
if ($value === NULL) {
return ($operator == '=' ? '*:* AND -' : '') . "$field:[* TO *]";
}
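For location fields the empty/non-empty check thus lands on the "_0___tdouble" subfield; the resulting filter shape can be sketched as (hypothetical helper):

```php
<?php

// Build a Solr filter for "field is empty" ('=' with NULL value) or
// "field is non-empty" (any other operator with NULL value).
function empty_condition_filter($field, $operator) {
  return ($operator == '=' ? '*:* AND -' : '') . "$field:[* TO *]";
}
```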
@ -1458,7 +1600,6 @@ class SearchApiSolrService extends SearchApiAbstractService {
$facet_params['facet.limit'] = 10;
$facet_params['facet.mincount'] = 1;
$facet_params['facet.missing'] = 'false';
$taggedFields = array();
foreach ($facets as $info) {
if (empty($fields[$info['field']])) {
continue;
@ -1468,10 +1609,9 @@ class SearchApiSolrService extends SearchApiAbstractService {
// Check for the "or" operator.
if (isset($info['operator']) && $info['operator'] === 'or') {
// Remember that filters for this field should be tagged.
$escaped = call_user_func(array($this->connection_class, 'escapeFieldName'), $fields[$info['field']]);
$taggedFields[$escaped] = "{!tag=$escaped}";
$tag = 'facet:' . $info['field'];
// Add the facet field.
$facet_params['facet.field'][] = "{!ex=$escaped}$field";
$facet_params['facet.field'][] = "{!ex=$tag}$field";
}
else {
// Add the facet field.
@ -1490,20 +1630,6 @@ class SearchApiSolrService extends SearchApiAbstractService {
$facet_params["f.$field.facet.missing"] = 'true';
}
}
// Tag filters of fields with "OR" facets.
foreach ($taggedFields as $field => $tag) {
$regex = '#(?<![^( ])' . preg_quote($field, '#') . ':#';
foreach ($fq as $i => $filter) {
// Solr can't handle two tags on the same filter, so we don't add two.
// Another option here would even be to remove the other tag, too,
// since we can be pretty sure that this filter does not originate from
// a facet; however, wrong results would still be possible, and this is
// definitely an edge case, so don't bother.
if (preg_match($regex, $filter) && substr($filter, 0, 6) != '{!tag=') {
$fq[$i] = $tag . $filter;
}
}
}
return $facet_params;
}
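The new tagging scheme pairs a {!tag=...} prefix on a field's filter queries (set in createFilterQueries()) with a matching {!ex=...} prefix on the facet field, so OR facet counts ignore the field's own filters. Stand-alone (hypothetical helper names):

```php
<?php

// Prefix all filter queries belonging to a field with the facet tag ...
function tag_filter_queries(array $fq, $search_api_field) {
  $tag = '{!tag=facet:' . $search_api_field . '}';
  foreach ($fq as $i => $filter) {
    $fq[$i] = $tag . $filter;
  }
  return $fq;
}

// ... and exclude exactly that tag again when faceting on the field.
function exclude_facet_tag($solr_field, $search_api_field) {
  return '{!ex=facet:' . $search_api_field . '}' . $solr_field;
}
```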
@ -1665,7 +1791,13 @@ class SearchApiSolrService extends SearchApiAbstractService {
// Extract filters
$fq = $this->createFilterQueries($query->getFilter(), $fields, $index->options['fields']);
$fq[] = 'index_id:' . $this->getIndexId($index->machine_name);
$index_id = $this->getIndexId($index->machine_name);
$fq[] = 'index_id:' . call_user_func(array($this->connection_class, 'phrase'), $index_id);
if (!empty($this->options['site_hash'])) {
// We don't need to escape the site hash, as that consists only of
// alphanumeric characters.
$fq[] = 'hash:' . search_api_solr_site_hash();
}
// Autocomplete magic
$facet_fields = array();
@ -1867,6 +1999,11 @@ class SearchApiSolrService extends SearchApiAbstractService {
$index_filter[] = 'index_id:' . call_user_func(array($this->connection_class, 'phrase'), $index_id);
}
$fq[] = implode(' OR ', $index_filter);
if (!empty($this->options['site_hash'])) {
// We don't need to escape the site hash, as that consists only of
// alphanumeric characters.
$fq[] = 'hash:' . search_api_solr_site_hash();
}
// Extract sort
$sort = array();
@ -2015,6 +2152,8 @@ class SearchApiSolrService extends SearchApiAbstractService {
}
}
drupal_alter('search_api_solr_multi_search_results', $results, $query, $response);
// Compute performance
$time_end = microtime(TRUE);
$results['performance'] = array(
@ -2163,4 +2302,5 @@ class SearchApiSolrService extends SearchApiAbstractService {
$id = variable_get('search_api_solr_index_prefix', '') . $id;
return $id;
}
}

View File

@ -182,11 +182,11 @@ class SearchApiSolrConnection implements SearchApiSolrConnectionInterface {
/**
* Flag that denotes whether to use soft commits for Solr 4.x.
*
* Defaults to FALSE.
* Defaults to TRUE.
*
* @var bool
*/
protected $soft_commit = FALSE;
protected $soft_commit = TRUE;
/**
* Implements SearchApiSolrConnectionInterface::__construct().
@ -384,6 +384,11 @@ class SearchApiSolrConnection implements SearchApiSolrConnectionInterface {
* Implements SearchApiSolrConnectionInterface::getSolrVersion().
*/
public function getSolrVersion() {
// Allow for overrides by the user.
if (!empty($this->options['solr_version'])) {
return $this->options['solr_version'];
}
$system_info = $this->getSystemInfo();
// Get our solr version number
if (isset($system_info->lucene->{'solr-spec-version'})) {
@ -856,7 +861,10 @@ class SearchApiSolrConnection implements SearchApiSolrConnectionInterface {
// Recurse into children.
if (is_array($value)) {
$params[] = $this->httpBuildQuery($value, $key);
$value = $this->httpBuildQuery($value, $key);
if ($value) {
$params[] = $value;
}
}
// If a query parameter value is NULL, only append its key.
elseif (!isset($value)) {
@ -882,10 +890,40 @@ class SearchApiSolrConnection implements SearchApiSolrConnectionInterface {
$params += array(
'json.nl' => self::NAMED_LIST_FORMAT,
);
if ($query) {
if (isset($query)) {
$params['q'] = $query;
}
// PHP's built-in http_build_query() doesn't give us the format Solr wants.
// Carry out some performance improvements when no search keys are given.
if (!isset($params['q']) || !strlen($params['q'])) {
// Without search keys, the qf parameter is useless. We also remove empty
// search keys here. (With our normal service class, empty keys won't be
// set, but another module using this connection class might do that.)
unset($params['q'], $params['qf']);
// If we have filters set (which will nearly always be the case, since we
// have to filter by index), move them to the q.alt parameter where
// possible.
if (!empty($params['fq'])) {
$qalt = array();
foreach ($params['fq'] as $i => $fq) {
// Tagged and negative filters cannot be moved to q.alt.
if ($fq[0] !== '{' && $fq[0] !== '-') {
$qalt[] = "($fq)";
unset($params['fq'][$i]);
}
}
if ($qalt) {
$params['q.alt'] = implode(' ', $qalt);
}
if (empty($params['fq'])) {
unset($params['fq']);
}
}
}
// Build the HTTP query string. We have our own method for that since PHP's
// built-in http_build_query() doesn't give us the format Solr wants.
$queryString = $this->httpBuildQuery($params);
if ($method == 'GET' || $method == 'AUTO') {
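The filter-only performance improvement above can be sketched stand-alone (hypothetical function name; q.alt is the dismax fallback query used when no q is given):

```php
<?php

// Drop the useless q/qf parameters on key-less searches and move untagged,
// non-negative filters from fq to q.alt.
function optimize_filter_only_params(array $params) {
  if (!isset($params['q']) || !strlen($params['q'])) {
    unset($params['q'], $params['qf']);
    if (!empty($params['fq'])) {
      $qalt = array();
      foreach ($params['fq'] as $i => $fq) {
        // Tagged and negative filters have to stay in fq.
        if ($fq[0] !== '{' && $fq[0] !== '-') {
          $qalt[] = "($fq)";
          unset($params['fq'][$i]);
        }
      }
      if ($qalt) {
        $params['q.alt'] = implode(' ', $qalt);
      }
      if (empty($params['fq'])) {
        unset($params['fq']);
      }
    }
  }
  return $params;
}
```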

View File

@ -129,7 +129,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* The HTTP response object.
*
* @throws Exception
* @throws SearchApiException
*/
public function makeServletRequest($servlet, array $params = array(), array $options = array());
@ -164,7 +164,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call
*/
public function update($rawPost, $timeout = FALSE);
@ -185,7 +185,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function addDocuments(array $documents, $overwrite = NULL, $commitWithin = NULL);
@ -204,7 +204,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function commit($waitSearcher = TRUE, $timeout = 3600);
@ -221,7 +221,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function deleteById($id, $timeout = 3600);
@ -238,7 +238,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function deleteByMultipleIds(array $ids, $timeout = 3600);
@ -254,7 +254,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function deleteByQuery($rawQuery, $timeout = 3600);
@ -273,7 +273,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function optimize($waitSearcher = TRUE, $timeout = 3600);
@ -294,7 +294,7 @@ interface SearchApiSolrConnectionInterface {
* @return object
* A response object.
*
* @throws Exception
* @throws SearchApiException
* If an error occurs during the service call.
*/
public function search($query = NULL, array $params = array(), $method = 'GET');

View File

@ -100,6 +100,22 @@ function hook_search_api_solr_multi_query_alter(array &$call_args, SearchApiMult
}
}
/**
* Lets modules alter the search results returned from a multi-index search.
*
* @param array $results
* The results array that will be returned for the search.
* @param SearchApiMultiQueryInterface $query
* The executed multi-index search query.
* @param object $response
* The Solr response object.
*/
function hook_search_api_solr_multi_search_results_alter(array &$results, SearchApiMultiQueryInterface $query, $response) {
if (isset($response->facet_counts->facet_fields->custom_field)) {
// Do something with $results.
}
}
/**
* Provide Solr dynamic fields as Search API data types.
*

View File

@ -11,9 +11,9 @@ files[] = includes/solr_connection.interface.inc
files[] = includes/solr_field.inc
files[] = includes/spellcheck.inc
; Information added by Drupal.org packaging script on 2013-12-25
version = "7.x-1.4"
; Information added by Drupal.org packaging script on 2014-09-08
version = "7.x-1.6"
core = "7.x"
project = "search_api_solr"
datestamp = "1387970905"
datestamp = "1410186051"

View File

@ -63,6 +63,8 @@ function search_api_solr_uninstall() {
variable_del('search_api_solr_autocomplete_max_occurrences');
variable_del('search_api_solr_index_prefix');
variable_del('search_api_solr_http_get_max_length');
variable_del('search_api_solr_cron_action');
variable_del('search_api_solr_site_hash');
}
/**

View File

@ -71,17 +71,46 @@ function search_api_solr_help($path, array $arg = array()) {
* day.
*/
function search_api_solr_cron() {
if (REQUEST_TIME - variable_get('search_api_solr_last_optimize', 0) > 86400) {
$action = variable_get('search_api_solr_cron_action', 'spellcheck');
// We treat all unknown action settings as "none". However, we turn a blind
// eye for Britons and other people who can spell.
if (!in_array($action, array('spellcheck', 'optimize', 'optimise'))) {
return;
}
// 86400 seconds is one day. We use slightly less here to allow for some
// variation in the request time of the cron run, so that the time of day will
// (more or less) stay the same.
if (REQUEST_TIME - variable_get('search_api_solr_last_optimize', 0) > 86340) {
variable_set('search_api_solr_last_optimize', REQUEST_TIME);
$conditions = array('class' => 'search_api_solr_service', 'enabled' => TRUE);
$count = 0;
foreach (search_api_server_load_multiple(FALSE, $conditions) as $server) {
try {
$server->getSolrConnection()->optimize(FALSE);
$solr = $server->getSolrConnection();
if ($action != 'spellcheck') {
$solr->optimize(FALSE);
}
else {
$params['rows'] = 0;
$params['spellcheck'] = 'true';
$params['spellcheck.build'] = 'true';
$solr->search(NULL, $params);
}
++$count;
}
catch(Exception $e) {
catch(SearchApiException $e) {
watchdog_exception('search_api_solr', $e, '%type while optimizing Solr server @server: !message in %function (line %line of %file).', array('@server' => $server->name));
}
}
if ($count) {
$vars['@count'] = $count;
if ($action != 'spellcheck') {
watchdog('search_api_solr', 'Optimized @count Solr server(s).', $vars, WATCHDOG_INFO);
}
else {
watchdog('search_api_solr', 'Rebuilt spellcheck dictionary on @count Solr server(s).', $vars, WATCHDOG_INFO);
}
}
}
}
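As an aside on the 86340-second threshold in the cron hook above: using slightly less than a full day keeps the daily action anchored to the same time of day, because with a strict `> 86400` check an invocation arriving at (or just before) exactly 24 hours fails the comparison and the run slips one cron interval later each day. A toy simulation of that reasoning (Python, illustrative only — not part of the module):

```python
def daily_runs(threshold, cron_times):
    """Collect the timestamps at which a once-a-day action fires, given a
    minimum gap `threshold` (seconds) since the previous run and the
    timestamps of all cron invocations."""
    last = float("-inf")
    runs = []
    for t in cron_times:
        if t - last > threshold:
            last = t
            runs.append(t)
    return runs

# Hourly cron over three days, each invocation 5 seconds past the hour.
cron = [day * 86400 + hour * 3600 + 5 for day in range(3) for hour in range(24)]

# With a strict 86400-second gap the action drifts one hour later each day
# (hours 0, 1, 2); with 86340 it stays anchored at hour 0 every day.
strict = daily_runs(86400, cron)
lenient = daily_runs(86340, cron)
```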
@ -206,6 +235,25 @@ function search_api_solr_get_data_type_info($type = NULL) {
return $types;
}
/**
* Returns a unique hash for the current site.
*
* This is used to identify Solr documents from different sites within a single
* Solr server.
*
* @return string
* A unique site hash, containing only alphanumeric characters.
*/
function search_api_solr_site_hash() {
// Copied from apachesolr_site_hash().
if (!($hash = variable_get('search_api_solr_site_hash', FALSE))) {
global $base_url;
$hash = substr(base_convert(sha1(uniqid($base_url, TRUE)), 16, 36), 0, 6);
variable_set('search_api_solr_site_hash', $hash);
}
return $hash;
}
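Conceptually, the hash above is a SHA-1 of a unique per-site seed, base-36 encoded and truncated to six characters. A Python sketch of the same derivation (illustrative only; note that PHP's `base_convert()` loses precision on a 40-digit hex string, so exact outputs differ, but the shape — six alphanumeric characters — is the same):

```python
import hashlib
import uuid

BASE36 = "0123456789abcdefghijklmnopqrstuvwxyz"

def base36(n):
    """Encode a non-negative integer in base 36 (digits, then lowercase letters)."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, rem = divmod(n, 36)
        out.append(BASE36[rem])
    return "".join(reversed(out))

def site_hash(base_url):
    """Mimic search_api_solr_site_hash(): hash a unique seed derived from the
    site's base URL, then keep the first six base-36 characters."""
    seed = base_url + uuid.uuid4().hex  # stands in for PHP's uniqid($base_url, TRUE)
    digest = hashlib.sha1(seed.encode()).hexdigest()
    return base36(int(digest, 16))[:6]
```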
/**
* Retrieves a list of all config files of a server.
*
@ -229,9 +277,17 @@ function search_api_solr_server_get_files(SearchApiServer $server, $dir_name = N
// Search for directories and recursively merge directory files.
$files_data = json_decode($response->data, TRUE);
$files_list = $files_data['files'];
$dir_length = strlen($dir_name) + 1;
$result = array('' => array());
foreach ($files_list as $file_name => $file_info) {
// Annoyingly, Solr 4.7 changed the way the admin/file handler returns
// the file names when listing directory contents: the returned name is now
// only the base name, not the complete path from the config root directory.
// We therefore have to check for this case.
if ($dir_name && substr($file_name, 0, $dir_length) !== "$dir_name/") {
$file_name = "$dir_name/" . $file_name;
}
if (empty($file_info['directory'])) {
$result[''][$file_name] = $file_info;
}
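The Solr 4.7 workaround above boils down to a single normalization rule: if the file handler returned only a base name, re-prefix the directory being listed. A minimal sketch of that rule (Python, names illustrative):

```python
def normalize_file_name(file_name, dir_name):
    """Return `file_name` as a path relative to the config root.

    Solr before 4.7 already returns the full relative path (e.g.
    "conf/stopwords.txt") when listing a directory; Solr 4.7 returns only
    the base name ("stopwords.txt"), so the directory must be re-prefixed.
    """
    if dir_name and not file_name.startswith(dir_name + "/"):
        return dir_name + "/" + file_name
    return file_name
```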
@ -315,3 +371,35 @@ function _search_api_solr_switch_to_clean_ids(array $form, array &$form_state) {
drupal_set_message($msg);
}
}
/**
* Switches a server to multi-site compatibility mode.
*
* Used as a submit callback in SearchApiSolrService::configurationForm().
*/
function _search_api_solr_switch_to_site_hash(array $form, array &$form_state) {
$server = $form_state['server'];
try {
$conditions['server'] = $server->machine_name;
$indexes = search_api_index_load_multiple(FALSE, $conditions);
if ($indexes) {
foreach ($indexes as $index) {
$index->reindex();
}
$msg = format_plural(count($indexes), '1 index was cleared.', '@count indexes were cleared.');
$server->deleteItems('index_id:(' . implode(' ', array_keys($indexes)) . ')');
drupal_set_message($msg);
}
}
catch (SearchApiException $e) {
$variables = array('@server' => $server->name);
watchdog_exception('search_api_solr', $e, '%type while attempting to enable multi-site compatibility mode for Solr server @server: !message in %function (line %line of %file).', $variables);
drupal_set_message(t('An error occurred while attempting to enable multi-site compatibility mode for Solr server @server. Check the logs for details.', $variables), 'error');
return;
}
$server->options['site_hash'] = TRUE;
$server->save();
drupal_set_message(t('The Solr server was successfully switched to multi-site compatibility mode.'));
}

View File

@ -14,9 +14,16 @@
function search_api_solr_views_data_alter(array &$data) {
try {
foreach (search_api_index_load_multiple(FALSE) as $index) {
$server = $index->server();
$server = NULL;
try {
$server = $index->server();
}
catch (SearchApiException $e) {
// Just ignore invalid servers and skip the index.
}
if (!$server || empty($server->options['retrieve_data'])) {
return;
continue;
}
// Fill in base data.
$key = 'search_api_index_' . $index->machine_name;

View File

@ -1,31 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
This file allows you to boost certain search items to the top of search
results. You can find out an item's ID by searching directly on the Solr
server. The item IDs are in general constructed as follows:
Search API:
$document->id = $index_id . '-' . $item_id;
Apache Solr Search Integration:
$document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;
If you want this file to be automatically re-loaded when a Solr commit takes
place (e.g., if you have an automatic script active which updates elevate.xml
according to newly-indexed data), place it into Solr's data/ directory.
Otherwise, place it with the other configuration files into the conf/
directory.
See http://wiki.apache.org/solr/QueryElevationComponent for more information.
-->
<elevate>
<!-- Example for ranking the node #1 first in searches for "example query": -->
<!--
<query text="example query">
<doc id="default_node_index-1" />
<doc id="7v3jsc/node/1" />
</query>
-->
<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->
<!-- <query text="...">...</query> -->
</elevate>

View File

@ -1,14 +0,0 @@
# This file contains character mappings for the default fulltext field type.
# The source characters (on the left) will be replaced by the respective target
# characters before any other processing takes place.
# Lines starting with a pound character # are ignored.
#
# For sensible defaults, use the mapping-ISOLatin1Accent.txt file distributed
# with the example application of your Solr version.
#
# Examples:
# "À" => "A"
# "\u00c4" => "A"
# "\u00c4" => "\u0041"
# "æ" => "ae"
# "\n" => " "

View File

@ -1,7 +0,0 @@
#-----------------------------------------------------------------------
# This file blocks words from being operated on by the stemmer and word delimiter.
&amp;
&lt;
&gt;
&#039;
&quot;

View File

@ -1,535 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
This is the Solr schema file. This file should be named "schema.xml" and
should be in the conf directory under the solr home
(i.e. ./solr/conf/schema.xml by default)
or located where the classloader for the Solr webapp can find it.
For more information, on how to customize this file, please see
http://wiki.apache.org/solr/SchemaXml
-->
<schema name="drupal-4.1-solr-1.4" version="1.2">
<!-- attribute "name" is the name of this schema and is only used for display purposes.
Applications should change this to reflect the nature of the search collection.
version="1.2" is Solr's version number for the schema syntax and semantics. It should
not normally be changed by applications.
1.0: multiValued attribute did not exist, all fields are multiValued by nature
1.1: multiValued attribute introduced, false by default
1.2: omitTermFreqAndPositions attribute introduced, true by default except for text fields.
-->
<types>
<!-- field type definitions. The "name" attribute is
just a label to be used by field definitions. The "class"
attribute and any other attributes determine the real
behavior of the fieldType.
Class names starting with "solr" refer to java classes in the
org.apache.solr.analysis package.
-->
<!-- The StrField type is not analyzed, but indexed/stored verbatim.
- StrField and TextField support an optional compressThreshold which
limits compression (if enabled in the derived fields) to values which
exceed a certain size (in characters).
-->
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
<!-- boolean type: "true" or "false" -->
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" omitNorms="true"/>
<!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
<fieldtype name="binary" class="solr.BinaryField"/>
<!-- The optional sortMissingLast and sortMissingFirst attributes are
currently supported on types that are sorted internally as strings.
- If sortMissingLast="true", then a sort on this field will cause documents
without the field to come after documents with the field,
regardless of the requested sort order (asc or desc).
- If sortMissingFirst="true", then a sort on this field will cause documents
without the field to come before documents with the field,
regardless of the requested sort order.
- If sortMissingLast="false" and sortMissingFirst="false" (the default),
then default lucene sorting will be used which places docs without the
field first in an ascending sort and last in a descending sort.
-->
<!-- numeric field types that can be sorted, but are not optimized for range queries -->
<fieldType name="integer" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="long" class="solr.TrieLongField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>
<!--
Note:
These should only be used for compatibility with existing indexes (created with older Solr versions)
or if "sortMissingFirst" or "sortMissingLast" functionality is needed. Use Trie based fields instead.
Numeric field types that manipulate the value into
a string value that isn't human-readable in its internal form,
but with a lexicographic ordering the same as the numeric ordering,
so that range queries work correctly.
-->
<fieldType name="sint" class="solr.TrieIntField" sortMissingLast="true" omitNorms="true"/>
<fieldType name="sfloat" class="solr.TrieFloatField" sortMissingLast="true" omitNorms="true"/>
<fieldType name="slong" class="solr.TrieLongField" sortMissingLast="true" omitNorms="true"/>
<fieldType name="sdouble" class="solr.TrieDoubleField" sortMissingLast="true" omitNorms="true"/>
<!--
Numeric field types that index each value at various levels of precision
to accelerate range queries when the number of values between the range
endpoints is large. See the javadoc for NumericRangeQuery for internal
implementation details.
Smaller precisionStep values (specified in bits) will lead to more tokens
indexed per value, slightly larger index size, and faster range queries.
A precisionStep of 0 disables indexing at different precision levels.
-->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
<!--
The ExternalFileField type gets values from an external file instead of the
index. This is useful for data such as rankings that might change frequently
and require different update frequencies than the documents they are
associated with.
-->
<fieldType name="pfloat" class="solr.FloatField" omitNorms="true"/>
<fieldType name="file" keyField="id" defVal="1" stored="false" indexed="false" class="solr.ExternalFileField" valType="pfloat"/>
<!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
is a more restricted form of the canonical representation of dateTime
http://www.w3.org/TR/xmlschema-2/#dateTime
The trailing "Z" designates UTC time and is mandatory.
Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
All other components are mandatory.
Expressions can also be used to denote calculations that should be
performed relative to "NOW" to determine the value, ie...
NOW/HOUR
... Round to the start of the current hour
NOW-1DAY
... Exactly 1 day prior to now
NOW/DAY+6MONTHS+3DAYS
... 6 months and 3 days in the future from the start of
the current day
Consult the DateField javadocs for more information.
-->
<fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/>
<!-- A Trie based date field for faster date range queries and date faceting. -->
<fieldType name="tdate" class="solr.TrieDateField" omitNorms="true" precisionStep="6" positionIncrementGap="0"/>
<!-- solr.TextField allows the specification of custom text analyzers
specified as a tokenizer and a list of token filters. Different
analyzers may be specified for indexing and querying.
The optional positionIncrementGap puts space between multiple fields of
this type on the same document, with the purpose of preventing false phrase
matching across fields.
For more info on customizing your analyzer chain, please see
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-->
<!-- One can also specify an existing Analyzer class that has a
default constructor via the class attribute on the analyzer element
<fieldType name="text_greek" class="solr.TextField">
<analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
</fieldType>
-->
<!-- A text field that only splits on whitespace for exact matching of words -->
<fieldType name="text_ws" class="solr.TextField" omitNorms="true" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
<!-- A text field that uses WordDelimiterFilter to enable splitting and matching of
words on case-change, alpha numeric boundaries, and non-alphanumeric chars,
so that a query of "wifi" or "wi fi" could match a document containing "Wi-Fi".
Synonyms and stopwords are customized by external files, and stemming is enabled.
Duplicate tokens at the same position (which may result from Stemmed Synonyms or
WordDelim parts) are removed.
-->
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- Case insensitive stop word removal.
add enablePositionIncrements=true in both the index and query
analyzers to leave a 'gap' for more accurate phrase queries.
-->
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="stopwords.txt"
enablePositionIncrements="true"
/>
<filter class="solr.WordDelimiterFilterFactory"
protected="protwords.txt"
generateWordParts="1"
generateNumberParts="1"
catenateWords="1"
catenateNumbers="1"
catenateAll="0"
splitOnCaseChange="1"
preserveOriginal="1"/>
<filter class="solr.LengthFilterFactory" min="2" max="100" />
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
<analyzer type="query">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="stopwords.txt"
enablePositionIncrements="true"
/>
<filter class="solr.WordDelimiterFilterFactory"
protected="protwords.txt"
generateWordParts="1"
generateNumberParts="1"
catenateWords="0"
catenateNumbers="0"
catenateAll="0"
splitOnCaseChange="1"
preserveOriginal="1"/>
<filter class="solr.LengthFilterFactory" min="2" max="100" />
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
<!-- An unstemmed text field - good if one does not know the language of the field -->
<fieldType name="text_und" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
<filter class="solr.WordDelimiterFilterFactory"
protected="protwords.txt"
generateWordParts="1"
generateNumberParts="1"
catenateWords="1"
catenateNumbers="1"
catenateAll="0"
splitOnCaseChange="0"/>
<filter class="solr.LengthFilterFactory" min="2" max="100" />
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="stopwords.txt"
enablePositionIncrements="true"
/>
<filter class="solr.WordDelimiterFilterFactory"
protected="protwords.txt"
generateWordParts="1"
generateNumberParts="1"
catenateWords="0"
catenateNumbers="0"
catenateAll="0"
splitOnCaseChange="0"/>
<filter class="solr.LengthFilterFactory" min="2" max="100" />
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
<!-- Edge N-gram type - for example, for prefix matching of user queries (autocomplete).
KeywordTokenizer leaves the input string intact as a single term.
see: http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
-->
<fieldType name="edge_n2_kw_text" class="solr.TextField" omitNorms="true" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.KeywordTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="25" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.KeywordTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
<!-- Setup simple analysis for spell checking -->
<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
<filter class="solr.LengthFilterFactory" min="4" max="20" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.RemoveDuplicatesTokenFilterFactory" />
</analyzer>
</fieldType>
<!-- This is an example of using the KeywordTokenizer along
With various TokenFilterFactories to produce a sortable field
that does not include some properties of the source text
-->
<fieldType name="sortString" class="solr.TextField" sortMissingLast="true" omitNorms="true">
<analyzer>
<!-- KeywordTokenizer does no actual tokenizing, so the entire
input string is preserved as a single token
-->
<tokenizer class="solr.KeywordTokenizerFactory"/>
<!-- The LowerCase TokenFilter does what you expect, which can be
useful when you want your sorting to be case insensitive
-->
<filter class="solr.LowerCaseFilterFactory" />
<!-- The TrimFilter removes any leading or trailing whitespace -->
<filter class="solr.TrimFilterFactory" />
<!-- The PatternReplaceFilter gives you the flexibility to use
Java Regular expression to replace any sequence of characters
matching a pattern with an arbitrary replacement string,
which may include back references to portions of the original
string matched by the pattern.
See the Java Regular Expression documentation for more
information on pattern and replacement string syntax.
http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/package-summary.html
<filter class="solr.PatternReplaceFilterFactory"
pattern="(^\p{Punct}+)" replacement="" replace="all"
/>
-->
</analyzer>
</fieldType>
<!-- A random sort type -->
<fieldType name="rand" class="solr.RandomSortField" indexed="true" />
<!-- since fields of this type are by default not stored or indexed, any data added to
them will be ignored outright
-->
<fieldtype name="ignored" stored="false" indexed="false" class="solr.StrField" />
</types>
<!-- Following is a dynamic way to include other types, added by other contrib modules -->
<xi:include href="solr/conf/schema_extra_types.xml" xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:fallback></xi:fallback>
</xi:include>
<fields>
<!-- Valid attributes for fields:
name: mandatory - the name for the field
type: mandatory - the name of a previously defined type from the <types> section
indexed: true if this field should be indexed (searchable or sortable)
stored: true if this field should be retrievable
compressed: [false] if this field should be stored using gzip compression
(this will only apply if the field type is compressable; among
the standard field types, only TextField and StrField are)
multiValued: true if this field may contain multiple values per document
omitNorms: (expert) set to true to omit the norms associated with
this field (this disables length normalization and index-time
boosting for the field, and saves some memory). Only full-text
fields or fields that need an index-time boost need norms.
-->
<!-- The document id is usually derived from a site-specific key (hash) and the
entity type and ID like:
Search Api :
The format used is $document->id = $index_id . '-' . $item_id
Apache Solr Search Integration
The format used is $document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;
-->
<field name="id" type="string" indexed="true" stored="true" required="true" />
<!-- Search Api specific fields -->
<!-- item_id contains the entity ID, e.g. a node's nid. -->
<field name="item_id" type="string" indexed="true" stored="true" />
<!-- index_id is the machine name of the search index this entry belongs to. -->
<field name="index_id" type="string" indexed="true" stored="true" />
<!-- Since sorting by ID is explicitly allowed, store item_id also in a sortable way. -->
<copyField source="item_id" dest="sort_search_api_id" />
<!-- Apache Solr Search Integration specific fields -->
<!-- entity_id is the numeric object ID, e.g. Node ID, File ID -->
<field name="entity_id" type="long" indexed="true" stored="true" />
<!-- entity_type is 'node', 'file', 'user', or some other Drupal object type -->
<field name="entity_type" type="string" indexed="true" stored="true" />
<!-- bundle is a node type, or as appropriate for other entity types -->
<field name="bundle" type="string" indexed="true" stored="true"/>
<field name="bundle_name" type="string" indexed="true" stored="true"/>
<field name="site" type="string" indexed="true" stored="true"/>
<field name="hash" type="string" indexed="true" stored="true"/>
<field name="url" type="string" indexed="true" stored="true"/>
<!-- label is the default field for a human-readable string for this entity (e.g. the title of a node) -->
<field name="label" type="text" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<!-- The string version of the title is used for sorting -->
<copyField source="label" dest="sort_label"/>
<!-- content is the default field for full text search - dump crap here -->
<field name="content" type="text" indexed="true" stored="true" termVectors="true"/>
<field name="teaser" type="text" indexed="false" stored="true"/>
<field name="path" type="string" indexed="true" stored="true"/>
<field name="path_alias" type="text" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<!-- These are the fields that correspond to a Drupal node. The beauty of having
Lucene store title, body, type, etc., is that we retrieve them with the search
result set and don't need to go to the database with a node_load. -->
<field name="tid" type="long" indexed="true" stored="true" multiValued="true"/>
<field name="taxonomy_names" type="text" indexed="true" stored="false" termVectors="true" multiValued="true" omitNorms="true"/>
<!-- Copy terms to a single field that contains all taxonomy term names -->
<copyField source="tm_vid_*" dest="taxonomy_names"/>
<!-- Here, default is used to create a "timestamp" field indicating
when each document was indexed.-->
<field name="timestamp" type="tdate" indexed="true" stored="true" default="NOW" multiValued="false"/>
<!-- This field is used to build the spellchecker index -->
<field name="spell" type="textSpell" indexed="true" stored="true" multiValued="true"/>
<!-- copyField commands copy one field to another at the time a document
is added to the index. It's used either to index the same field differently,
or to add multiple fields to the same field for easier/faster searching. -->
<copyField source="label" dest="spell"/>
<copyField source="content" dest="spell"/>
<copyField source="ts_*" dest="spell"/>
<copyField source="tm_*" dest="spell"/>
<!-- Dynamic field definitions. If a field name is not found, dynamicFields
will be used if the name matches any of the patterns.
RESTRICTION: the glob-like pattern in the name attribute must have
a "*" only at the start or the end.
EXAMPLE: name="*_i" will match any field ending in _i (like myid_i, z_i)
Longer patterns will be matched first. if equal size patterns
both match, the first appearing in the schema will be used. -->
<!-- A set of fields to contain text extracted from HTML tag contents which we
can boost at query time. -->
<dynamicField name="tags_*" type="text" indexed="true" stored="false" omitNorms="true"/>
<!-- For 2 and 3 letter prefix dynamic fields, the 1st letter indicates the data type and
the last letter is 's' for single valued, 'm' for multi-valued -->
<!-- We use long for integer since 64 bit ints are now common in PHP. -->
<dynamicField name="is_*" type="long" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="im_*" type="long" indexed="true" stored="true" multiValued="true"/>
<!-- List of floats can be saved in a regular float field -->
<dynamicField name="fs_*" type="float" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="fm_*" type="float" indexed="true" stored="true" multiValued="true"/>
<!-- List of doubles can be saved in a regular double field -->
<dynamicField name="ps_*" type="double" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="pm_*" type="double" indexed="true" stored="true" multiValued="true"/>
<!-- List of booleans can be saved in a regular boolean field -->
<dynamicField name="bm_*" type="boolean" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="bs_*" type="boolean" indexed="true" stored="true" multiValued="false"/>
<!-- Regular text (without processing) can be stored in a string field-->
<dynamicField name="ss_*" type="string" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="sm_*" type="string" indexed="true" stored="true" multiValued="true"/>
<!-- Normal text fields are for full text - the relevance of a match depends on the length of the text -->
<dynamicField name="ts_*" type="text" indexed="true" stored="true" multiValued="false" termVectors="true"/>
<dynamicField name="tm_*" type="text" indexed="true" stored="true" multiValued="true" termVectors="true"/>
<!-- Unstemmed text fields for full text - the relevance of a match depends on the length of the text -->
<dynamicField name="tus_*" type="text_und" indexed="true" stored="true" multiValued="false" termVectors="true"/>
<dynamicField name="tum_*" type="text_und" indexed="true" stored="true" multiValued="true" termVectors="true"/>
<!-- These text fields omit norms - useful for extracted text like taxonomy_names -->
<dynamicField name="tos_*" type="text" indexed="true" stored="true" multiValued="false" termVectors="true" omitNorms="true"/>
<dynamicField name="tom_*" type="text" indexed="true" stored="true" multiValued="true" termVectors="true" omitNorms="true"/>
<!-- Special-purpose text fields -->
<dynamicField name="tes_*" type="edge_n2_kw_text" indexed="true" stored="true" multiValued="false" omitTermFreqAndPositions="true" />
<dynamicField name="tem_*" type="edge_n2_kw_text" indexed="true" stored="true" multiValued="true" omitTermFreqAndPositions="true" />
<dynamicField name="tws_*" type="text_ws" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="twm_*" type="text_ws" indexed="true" stored="true" multiValued="true"/>
<!-- trie dates are preferred, so give them the 2 letter prefix -->
<dynamicField name="ds_*" type="tdate" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="dm_*" type="tdate" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="its_*" type="tlong" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="itm_*" type="tlong" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="fts_*" type="tfloat" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="ftm_*" type="tfloat" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="pts_*" type="tdouble" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="ptm_*" type="tdouble" indexed="true" stored="true" multiValued="true"/>
<!-- Binary fields can be populated using base64 encoded data. Useful e.g. for embedding
a small image in a search result using the data URI scheme -->
<dynamicField name="xs_*" type="binary" indexed="false" stored="true" multiValued="false"/>
<dynamicField name="xm_*" type="binary" indexed="false" stored="true" multiValued="true"/>
<!-- In rare cases a date rather than tdate is needed for sortMissingLast -->
<dynamicField name="dds_*" type="date" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="ddm_*" type="date" indexed="true" stored="true" multiValued="true"/>
<!-- Sortable fields, good for sortMissingLast support &
We use long for integer since 64 bit ints are now common in PHP. -->
<dynamicField name="iss_*" type="slong" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="ism_*" type="slong" indexed="true" stored="true" multiValued="true"/>
<!-- In rare cases a sfloat rather than tfloat is needed for sortMissingLast -->
<dynamicField name="fss_*" type="sfloat" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="fsm_*" type="sfloat" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="pss_*" type="sdouble" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="psm_*" type="sdouble" indexed="true" stored="true" multiValued="true"/>
<!-- In case a 32-bit int is really needed, we provide these fields. 'h' is mnemonic for 'half word', i.e. 32 bits on a 64-bit arch -->
<dynamicField name="hs_*" type="integer" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="hm_*" type="integer" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="hss_*" type="sint" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="hsm_*" type="sint" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="hts_*" type="tint" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="htm_*" type="tint" indexed="true" stored="true" multiValued="true"/>
<!-- Unindexed string fields that can be used to store values that won't be searchable -->
<dynamicField name="zs_*" type="string" indexed="false" stored="true" multiValued="false"/>
<dynamicField name="zm_*" type="string" indexed="false" stored="true" multiValued="true"/>
<!-- Begin compatibility code for added fields in Solr 3.4+
http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function -->
<dynamicField name="points_*" type="string" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="pointm_*" type="string" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="locs_*" type="string" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="locm_*" type="string" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="geos_*" type="string" indexed="true" stored="true" multiValued="false"/>
<dynamicField name="geom_*" type="string" indexed="true" stored="true" multiValued="true"/>
<!-- External file fields -->
<dynamicField name="eff_*" type="string"/>
<!-- End compatibility code -->
<!-- Sortable version of the dynamic string field -->
<dynamicField name="sort_*" type="sortString" indexed="true" stored="false"/>
<copyField source="ss_*" dest="sort_*"/>
<!-- A random sort field -->
<dynamicField name="random_*" type="rand" indexed="true" stored="true"/>
<!-- This field is used to store access information (e.g. node access grants), as opposed to field data -->
<dynamicField name="access_*" type="integer" indexed="true" stored="false" multiValued="true"/>
<!-- The following causes solr to ignore any fields that don't already match an existing
field name or dynamic field, rather than reporting them as an error.
Alternately, change the type="ignored" to some other type e.g. "text" if you want
unknown fields indexed and/or stored by default -->
<dynamicField name="*" type="ignored" multiValued="true" />
</fields>
<!-- Following is a dynamic way to include other fields, added by other contrib modules -->
<xi:include href="solr/conf/schema_extra_fields.xml" xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:fallback></xi:fallback>
</xi:include>
<!-- Field to use to determine and enforce document uniqueness.
Unless this field is marked with required="false", it will be a required field
-->
<uniqueKey>id</uniqueKey>
<!-- field for the QueryParser to use when an explicit fieldname is absent -->
<defaultSearchField>content</defaultSearchField>
<!-- SolrQueryParser configuration: defaultOperator="AND|OR" -->
<solrQueryParser defaultOperator="AND"/>
</schema>
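The dynamic-field prefixes above encode a field's Solr type and cardinality in its name. The following is a minimal sketch (Python, illustrative only — the module's real mapping lives in its PHP code) of how a document might be assembled against this schema, including the base64 encoding that the binary `xs_*`/`xm_*` fields expect. The data-type labels on the left of the mapping are hypothetical names, not module constants.

```python
import base64

# Illustrative subset of the dynamic-field prefixes defined in schema.xml:
# (data_type, multi_valued) -> prefix. Only prefixes shown above are used.
PREFIXES = {
    ("text", False): "tes_",     # edge_n2_kw_text, single-valued
    ("date", False): "ds_",      # tdate
    ("integer", False): "its_",  # tlong (64-bit ints are common in PHP)
    ("float", False): "fts_",    # tfloat
    ("binary", False): "xs_",    # base64-encoded payload
    ("string_unindexed", False): "zs_",  # stored but not searchable
}

def solr_field_name(data_type, multi_valued, field_id):
    """Build a dynamic field name such as ds_created or xs_thumb."""
    return PREFIXES[(data_type, multi_valued)] + field_id

def binary_field_value(raw_bytes):
    """Binary fields (xs_*/xm_*) are populated with base64-encoded data,
    e.g. to embed a tiny image in a search result via a data URI."""
    return base64.b64encode(raw_bytes).decode("ascii")

doc = {
    "id": "node/1",
    solr_field_name("text", False, "title"): "Hello",
    solr_field_name("binary", False, "thumb"): binary_field_value(b"\x89PNG"),
}
```

Fields whose names match no dynamic pattern fall through to the catch-all `type="ignored"` rule at the end of the schema.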


@@ -1,23 +0,0 @@
<fields>
<!--
Adding German dynamic fields to our Solr schema.
If you enable this, make sure you have a folder called lang containing
stopwords_de.txt and synonyms_de.txt.
This also requires enabling the corresponding types in schema_extra_types.xml.
-->
<!--
<field name="label_de" type="text_de" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<field name="content_de" type="text_de" indexed="true" stored="true" termVectors="true"/>
<field name="teaser_de" type="text_de" indexed="false" stored="true"/>
<field name="path_alias_de" type="text_de" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<field name="taxonomy_names_de" type="text_de" indexed="true" stored="false" termVectors="true" multiValued="true" omitNorms="true"/>
<field name="spell_de" type="text_de" indexed="true" stored="true" multiValued="true"/>
<copyField source="label_de" dest="spell_de"/>
<copyField source="content_de" dest="spell_de"/>
<dynamicField name="tags_de_*" type="text_de" indexed="true" stored="false" omitNorms="true"/>
<dynamicField name="ts_de_*" type="text_de" indexed="true" stored="true" multiValued="false" termVectors="true"/>
<dynamicField name="tm_de_*" type="text_de" indexed="true" stored="true" multiValued="true" termVectors="true"/>
<dynamicField name="tos_de_*" type="text_de" indexed="true" stored="true" multiValued="false" termVectors="true" omitNorms="true"/>
<dynamicField name="tom_de_*" type="text_de" indexed="true" stored="true" multiValued="true" termVectors="true" omitNorms="true"/>
-->
</fields>


@@ -1,30 +0,0 @@
<types>
<!--
Adding German language support to our Solr schema.
If you enable this, make sure you have a folder called lang containing
stopwords_de.txt and synonyms_de.txt.
-->
<!--
<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_de.txt" format="snowball" ignoreCase="true" enablePositionIncrements="true"/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" splitOnCaseChange="1" splitOnNumerics="1" catenateWords="1" catenateNumbers="1" catenateAll="0" protected="protwords.txt" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.GermanLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
<analyzer type="query">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="lang/synonyms_de.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_de.txt" format="snowball" ignoreCase="true" enablePositionIncrements="true"/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" splitOnCaseChange="1" splitOnNumerics="1" catenateWords="0" catenateNumbers="0" catenateAll="0" protected="protwords.txt" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.GermanLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
-->
</types>
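The commented-out index-time chain above runs whitespace tokenization, stop-word removal (case-insensitive), lowercasing, light German stemming, and duplicate removal, in that order. A toy Python illustration of that filter order (stemming omitted; the stopword set is a stand-in for lang/stopwords_de.txt — real analysis happens inside Solr, not in client code):

```python
# Stand-in for lang/stopwords_de.txt; not the real snowball list.
STOPWORDS_DE = {"der", "die", "das", "und"}

def analyze(text):
    """Mimic the index-time analyzer chain's filter order only."""
    tokens = text.split()                                          # WhitespaceTokenizerFactory
    tokens = [t for t in tokens if t.lower() not in STOPWORDS_DE]  # StopFilterFactory (ignoreCase)
    tokens = [t.lower() for t in tokens]                           # LowerCaseFilterFactory
    seen, out = set(), []                                          # RemoveDuplicatesTokenFilterFactory
    for t in tokens:                                               # (toy: dedupes globally, not per position)
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out
```

Note that the query-time chain additionally expands synonyms and sets the WordDelimiterFilter catenate options to 0, so index-time and query-time analysis are deliberately not identical.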


@@ -1,80 +0,0 @@
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
<str name="queryAnalyzerFieldType">textSpell</str>
<!-- Multiple "Spell Checkers" can be declared and used by this
component
-->
<!-- a spellchecker built from a field of the main index, and
written to disk
-->
<lst name="spellchecker">
<str name="name">default</str>
<str name="field">spell</str>
<str name="spellcheckIndexDir">spellchecker</str>
<str name="buildOnOptimize">true</str>
<!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!--
Adding a German spellchecker index to our Solr index.
This also requires enabling the corresponding sections in schema_extra_types.xml and schema_extra_fields.xml.
-->
<!--
<lst name="spellchecker">
<str name="name">spellchecker_de</str>
<str name="field">spell_de</str>
<str name="spellcheckIndexDir">./spellchecker_de</str>
<str name="buildOnOptimize">true</str>
</lst>
-->
<!-- a spellchecker that uses a different distance measure -->
<!--
<lst name="spellchecker">
<str name="name">jarowinkler</str>
<str name="field">spell</str>
<str name="distanceMeasure">
org.apache.lucene.search.spell.JaroWinklerDistance
</str>
<str name="spellcheckIndexDir">spellcheckerJaro</str>
</lst>
-->
<!-- a spellchecker that uses an alternate comparator
comparatorClass can be one of:
1. score (default)
2. freq (Frequency first, then score)
3. A fully qualified class name
-->
<!--
<lst name="spellchecker">
<str name="name">freq</str>
<str name="field">lowerfilt</str>
<str name="spellcheckIndexDir">spellcheckerFreq</str>
<str name="comparatorClass">freq</str>
<str name="buildOnCommit">true</str>
</lst>
-->
<!-- A spellchecker that reads the list of words from a file -->
<!--
<lst name="spellchecker">
<str name="classname">solr.FileBasedSpellChecker</str>
<str name="name">file</str>
<str name="sourceLocation">spellings.txt</str>
<str name="characterEncoding">UTF-8</str>
<str name="spellcheckIndexDir">spellcheckerFile</str>
</lst>
-->
</searchComponent>
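To exercise the component above, a client passes the standard SpellCheckComponent parameters on a request to a handler that has the component wired in. A hedged sketch of building such a request URL (the /select handler path and base URL are assumptions; the parameter names are standard spellcheck parameters):

```python
from urllib.parse import urlencode

def spellcheck_url(base, keys, dictionary="default"):
    """Build a spellcheck-enabled query URL against an assumed /select handler."""
    params = {
        "q": keys,
        "spellcheck": "true",
        "spellcheck.dictionary": dictionary,  # e.g. "spellchecker_de" if enabled above
        "spellcheck.collate": "true",         # ask Solr for a re-queryable collation
        "wt": "json",
    }
    return base + "/select?" + urlencode(params)

url = spellcheck_url("http://localhost:8983/solr", "drupl")
```

With buildOnOptimize=true as configured above, the dictionary is rebuilt whenever the index is optimized, so no explicit spellcheck.build request is needed in normal operation.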


@@ -1,10 +0,0 @@
# Defines Solr properties for this specific core.
solr.replication.master=false
solr.replication.slave=false
solr.replication.pollInterval=00:00:60
solr.replication.masterUrl=http://localhost:8983/solr
solr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml
solr.mlt.timeAllowed=2000
solr.pinkPony.timeAllowed=-1
solr.autoCommit.MaxDocs=10000
solr.autoCommit.MaxTime=120000
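solrcore.properties is a plain Java-style key=value file, so it can be read from scripts without extra dependencies. A minimal sketch using Python's stdlib configparser, which needs only a dummy section header prepended (the inline sample reproduces a few keys from the file above):

```python
import configparser

# Sample reproducing a few of the properties defined above.
PROPS = """\
solr.replication.master=false
solr.replication.pollInterval=00:00:60
solr.autoCommit.MaxTime=120000
"""

def load_core_properties(text):
    """Parse key=value core properties into a dict."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep keys case-sensitive, as Java properties are
    parser.read_string("[core]\n" + text)  # configparser requires a section header
    return dict(parser["core"])

props = load_core_properties(PROPS)
```

Note that all values come back as strings; callers must convert numeric settings such as solr.autoCommit.MaxTime themselves.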


@@ -1,4 +0,0 @@
# Contains words which shouldn't be indexed for fulltext fields, e.g., because
# they're too common. For documentation of the format, see
# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory
# (Lines starting with a pound character # are ignored.)


@@ -1,3 +0,0 @@
# Contains synonyms to use for your index. For the format used, see
# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
# (Lines starting with a pound character # are ignored.)