In SugarCRM 8.0.1, I am trying to schedule a full system index, but I am running into the following error in the SugarCRM error log:
[FATAL] Elasticsearch request failure: Limit of total fields [1000] in index [my_index] has been exceeded
I am connected to a service running Elasticsearch 5.6.9 on another machine. There, I ran the following command to increase the service's field limit:
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d'
{
  "index.mapping.total_fields.limit": 10000
}
'
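For reference, the same limit can also be set in the request body when an index is first created (a generic Elasticsearch sketch, not anything SugarCRM-specific), in which case it applies from the moment the index exists rather than being added afterwards:

```json
{
  "settings": {
    "index.mapping.total_fields.limit": 10000
  }
}
```

This body would be sent with the index-creation request itself, e.g. `curl -XPUT 'localhost:9200/my_index' -H 'Content-Type: application/json' -d @settings.json`.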
I know the command is taking effect, because it returns:
{"acknowledged":true}
And if I run the following...
`curl -XGET 'localhost:9200/my_index/_settings?pretty'`
...this is part of what is returned:
"my_index" : {
  "settings" : {
    "index" : {
      "mapping" : {
        "total_fields" : {
          "limit" : "10000"
        },
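The check above can also be scripted; here is a minimal Python sketch that parses the settings response and confirms the new limit. The JSON string below is a trimmed stand-in for the actual GET output, and note that Elasticsearch reports the value as a string, so it is compared as an integer:

```python
import json

# Trimmed stand-in for the body returned by
# GET my_index/_settings?pretty
response = '''
{
  "my_index": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
            "limit": "10000"
          }
        }
      }
    }
  }
}
'''

settings = json.loads(response)
# Elasticsearch returns the setting as a string, so cast before comparing
limit = int(settings["my_index"]["settings"]["index"]
            ["mapping"]["total_fields"]["limit"])
print(limit)  # 10000
assert limit > 1000, "limit is still at the 1000-field default"
```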
All looks well, but I am still getting the same error back in my SugarCRM error log after trying to run the full system index again.
Are there any steps I am missing to make sure SugarCRM recognizes the new field limit? I have tried running a quick repair & rebuild and refreshing my cache in Sugar, but to no avail.
I also considered restarting my Elasticsearch service, but as far as I understand that shouldn't matter here: index-level settings applied through the _settings API are stored with the index metadata, so the new field limit should survive a restart without me having to issue the PUT request again.