
Upgrading from IMDG 3.12.x

This section lists the distribution, documentation and API changes to be aware of when you have been using IMDG 3.12.x and want to move to Hazelcast Platform.

See also Upgrading from IMDG and Jet Version 4.x to learn about the changes that need to be considered while upgrading from IMDG 4.x to Hazelcast Platform.
Hazelcast offers tools and features for a smooth migration from 3.12 to Platform 5.0. See Migrating Data from IMDG 3.12.x.

Hazelcast Platform is a major version release. Major releases allow us to break compatibility in the wire protocols and APIs, as well as to remove previously deprecated APIs.

As breaking changes have been made to the client and cluster member protocols, it is not possible to perform an in-place or rolling upgrade from a running IMDG 3.12.x cluster to Platform 5.0. The only way to upgrade to Platform 5.0 is to completely shut down the cluster.

Removal of Hazelcast Client Module

  • The hazelcast-client module has been merged into the core module: all the classes in the hazelcast-client module have been moved to hazelcast, and hazelcast-client.jar is not created anymore.

  • Also, the com.hazelcast.client Java module is not used anymore. All classes are now available within the com.hazelcast.core module.

JCache default Caching Provider

The default CachingProvider is the client-side CachingProvider. To select the member-side CachingProvider instead, define the Hazelcast property hazelcast.jcache.provider.type. See the Configuring JCache Provider section for more details.
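For example, a minimal sketch that selects the member-side provider before JCache bootstraps; the property value member is an assumption based on the Configuring JCache Provider section:

import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class MemberSideJCache {
    public static void main(String[] args) {
        // Select the member-side provider instead of the default client-side one
        System.setProperty("hazelcast.jcache.provider.type", "member");
        CachingProvider provider = Caching.getCachingProvider();
        System.out.println("Using provider: " + provider.getClass().getName());
    }
}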

Removal of User Defined Services

The public SPI (Service Provider Interface) known as User Defined Services has been removed. It was not simple enough to use, and its backward compatibility was broken. A new and clearly defined SPI may be developed in the future if there is enough interest. The removed SPI's classes are kept for internal use.

Changes in Client Connection Retry Mechanism

  • The connection-attempt-period and connection-attempt-limit configuration elements have been removed. Instead, the elements of connection-retry are now used, as sketched below. See Configuring Client Connection Retry for the usage of these new elements.
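A programmatic sketch of the new retry elements, assuming the ConnectionRetryConfig accessors available since IMDG 4.0 (see Configuring Client Connection Retry for the authoritative list):

import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ConnectionRetryConfig;

public class ClientRetryExample {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        ConnectionRetryConfig retryConfig =
                clientConfig.getConnectionStrategyConfig().getConnectionRetryConfig();
        retryConfig.setInitialBackoffMillis(1000);          // wait 1 s after the first failure
        retryConfig.setMaxBackoffMillis(30000);             // cap the backoff at 30 s
        retryConfig.setMultiplier(2.0);                     // double the backoff on each retry
        retryConfig.setClusterConnectTimeoutMillis(20000);  // give up after 20 s in total
        retryConfig.setJitter(0.2);                         // randomize each backoff by 20%
    }
}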

Increasing the Member/Client Thread Counts

If 20 or more processors are detected, the Hazelcast member by default starts 4+4 (4 input and 4 output) I/O threads. This increases out-of-the-box performance on faster machines because the workload is often I/O bound (especially in caching situations), and having some extra cores available for I/O can make a significant difference. If fewer than 20 cores are detected, 3+3 I/O threads are used and the behavior remains the same as in the Hazelcast IMDG 3.x series.

A smart client, by default, gets 3+3 (3 input and 3 output) I/O threads to speed up performance; previously, this was 1+1. With too few threads, the client I/O can become a bottleneck. If TLS/SSL is enabled, a smart client already used 3+3 I/O threads by default in previous versions.

Thread overcommit is a performance feature introduced in IMDG 4.0. By default, Hazelcast creates more threads than there are cores: on a 20-core machine, for example, it creates 28 threads, i.e., 20 threads for partition operations plus 4+4 threads for I/O. In a typical caching usage (get/put/set, etc.), having too many threads can degrade performance due to increased context switching.

Therefore, there is a new option called hazelcast.operation.thread.overcommit. If this property is set to true, i.e., -Dhazelcast.operation.thread.overcommit=true, which is the default, Hazelcast uses the old-style thread configuration with more threads than cores. If set to false, the number of partition threads plus the number of I/O threads equals the core count. Whether this gives a performance boost depends on the environment: in some environments it yields a significant gain and in others a significant loss, so it is best to benchmark your specific situation. If you are doing lots of queries or other CPU-bound tasks, e.g., aggregations, you probably want as many cores as possible available to partition operations.
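For example, to benchmark the leaner thread configuration, the property named above can also be set programmatically; a minimal sketch:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class OvercommitExample {
    public static void main(String[] args) {
        Config config = new Config();
        // false: the partition threads plus the I/O threads together match the core count
        config.setProperty("hazelcast.operation.thread.overcommit", "false");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}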

See Threading Model for more information on Hazelcast’s threading model.

Optimizing for Single Threaded Usages

A write-through optimization has been introduced; it helps to reduce latency in single threaded usages.

Normally, when a request is made, it is handed over to the I/O system, where an I/O thread takes care of sending it over the wire. This is great for throughput, but in single threaded setups it adds latency, and thereby reduces throughput, because threads need to be notified.

Hazelcast now detects the single threaded usage and tries to write through to the socket directly instead of handing it over to the I/O thread; this optimization is called "write-through".

This technique is applied on the client as well as on the member. Something similar happens when responses are received: normally a response is processed by the response thread, but in case of single threaded usage, the response is processed directly on the I/O thread, which removes a thread notification and therefore yields higher throughput.

Both write-through and response-through are enabled by default. If Hazelcast detects that there are many active threads, response-through and write-through are disabled so that they do not cause a performance degradation.

Removing Deprecated Client Configurations

The following methods of ClientConfig have been refactored:

  • addNearCacheConfig(String, NearCacheConfig) → addNearCacheConfig(NearCacheConfig)

  • setSmartRouting(boolean) → getNetworkConfig().setSmartRouting(boolean)

  • getSocketInterceptorConfig() → getNetworkConfig().getSocketInterceptorConfig()

  • setSocketInterceptorConfig(SocketInterceptorConfig) → getNetworkConfig().setSocketInterceptorConfig(SocketInterceptorConfig)

  • getConnectionTimeout() → getNetworkConfig().getConnectionTimeout()

  • setConnectionTimeout(int) → getNetworkConfig().setConnectionTimeout(int)

  • addAddress(String) → getNetworkConfig().addAddress(String)

  • getAddresses() → getNetworkConfig().getAddresses()

  • setAddresses(List) → getNetworkConfig().setAddresses(List)

  • isRedoOperation() → getNetworkConfig().isRedoOperation()

  • setRedoOperation(boolean) → getNetworkConfig().setRedoOperation(boolean)

  • getSocketOptions() → getNetworkConfig().getSocketOptions()

  • setSocketOptions(SocketOptions) → getNetworkConfig().setSocketOptions(SocketOptions)

  • getNetworkConfig().setAwsConfig(new ClientAwsConfig()) → getNetworkConfig().setAwsConfig(new AwsConfig())

Also, the ClientAwsConfig class has been renamed to AwsConfig.

The naming of the declarative configuration elements has not been changed.

See the following table for the before/after configuration samples.

Adding Near Cache

3.12.x:

ClientConfig clientConfig = new ClientConfig();
clientConfig.addNearCacheConfig("myCache", new NearCacheConfig());

5.0:

ClientConfig clientConfig = new ClientConfig();
NearCacheConfig nearCacheConfig = new NearCacheConfig("myCache");
clientConfig.addNearCacheConfig(nearCacheConfig);

Programmatic Configuration

3.12.x:

ClientConfig clientConfig = new ClientConfig();
clientConfig.setSmartRouting(true);
clientConfig.isSmartRouting();
clientConfig.getSocketInterceptorConfig();
clientConfig.setSocketInterceptorConfig(new SocketInterceptorConfig());
clientConfig.getConnectionTimeout();
clientConfig.setConnectionTimeout(1000);
clientConfig.addAddress("127.0.0.1:5701");
clientConfig.getAddresses();
clientConfig.setAddresses(Collections.singletonList("127.0.0.1:5701"));
clientConfig.isRedoOperation();
clientConfig.setRedoOperation(true);
clientConfig.getSocketOptions();
clientConfig.setSocketOptions(new SocketOptions());
clientConfig.getNetworkConfig().setAwsConfig(new ClientAwsConfig());
ClientAwsConfig awsConfig = clientConfig.getNetworkConfig().getAwsConfig();

5.0:

ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setSmartRouting(true);
clientConfig.getNetworkConfig().isSmartRouting();
clientConfig.getNetworkConfig().getSocketInterceptorConfig();
clientConfig.getNetworkConfig().setSocketInterceptorConfig(new SocketInterceptorConfig());
clientConfig.getNetworkConfig().getConnectionTimeout();
clientConfig.getNetworkConfig().setConnectionTimeout(1000);
clientConfig.getNetworkConfig().addAddress("127.0.0.1:5701");
clientConfig.getNetworkConfig().getAddresses();
clientConfig.getNetworkConfig().setAddresses(Collections.singletonList("127.0.0.1:5701"));
clientConfig.getNetworkConfig().isRedoOperation();
clientConfig.getNetworkConfig().setRedoOperation(true);
clientConfig.getNetworkConfig().getSocketOptions();
clientConfig.getNetworkConfig().setSocketOptions(new SocketOptions());
clientConfig.getNetworkConfig().setAwsConfig(new AwsConfig());
AwsConfig awsConfig = clientConfig.getNetworkConfig().getAwsConfig();

Changes in Index Configuration

In order to support further extensibility of Hazelcast, index configuration has been refactored.

The index type is now defined through the IndexType enumeration instead of a boolean flag: an ordered index is now referred to as IndexType.SORTED and an unordered one as IndexType.HASH.

In composite indexes, index parts are now defined as a list of strings instead of a single string with comma-separated values.

With these changes, the following configuration parameters have been renamed:

Programmatic configuration objects and methods:

  • MapIndexConfig → IndexConfig

  • MapConfig.getMapIndexConfig → MapConfig.getIndexConfig

  • MapConfig.setMapIndexConfig → MapConfig.setIndexConfig

  • MapConfig.addMapIndexConfig → MapConfig.addIndexConfig

  • IMap.addIndex(String, boolean) → IMap.addIndex(IndexConfig)

See the following table for the before/after samples.

Programmatic Configuration

3.12.x:

MapIndexConfig indexConfig = new MapIndexConfig();
indexConfig.setOrdered(false);
indexConfig.setAttribute("name, age");

MapConfig mapConfig = new MapConfig();
mapConfig.addMapIndexConfig(indexConfig);

5.0:

IndexConfig indexConfig = new IndexConfig();
indexConfig.setType(IndexType.HASH);
indexConfig.addAttribute("name");
indexConfig.addAttribute("age");

MapConfig mapConfig = new MapConfig();
mapConfig.addIndexConfig(indexConfig);

Declarative Configuration

3.12.x:

<hazelcast>
    ...
    <map name="person">
        <indexes>
            <index ordered="false">name, age</index>
        </indexes>
    </map>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <map name="person">
        <indexes>
            <index type="HASH">
                <attributes>
                    <attribute>name</attribute>
                    <attribute>age</attribute>
                </attributes>
            </index>
        </indexes>
    </map>
    ...
</hazelcast>

Dynamic Index Create

3.12.x:

IMap map;

map.addIndex("name, age", false);

5.0:

IMap map;

map.addIndex(new IndexConfig(IndexType.HASH, "name", "age"));

Changes in Custom Attributes

Custom attributes are referenced in predicates, queries and indexes. Improvements have been made to Hazelcast's query engine, and one of their results is a change in the custom attribute configuration.

With this change, the following configuration parameters have been renamed:

Declarative configuration elements:

  • extractor → extractor-class-name

Programmatic configuration objects and methods:

  • MapAttributeConfig → AttributeConfig

  • setExtractor() → setExtractorClassName()

  • addMapAttributeConfig() → addAttributeConfig()

See the following table for the before/after samples.

Programmatic Configuration

3.12.x:

MapAttributeConfig attributeConfig = new MapAttributeConfig();
attributeConfig.setName("currency");
attributeConfig.setExtractor("com.bank.CurrencyExtractor");

MapConfig mapConfig = new MapConfig();
mapConfig.addMapAttributeConfig(attributeConfig);

5.0:

AttributeConfig attributeConfig = new AttributeConfig();
attributeConfig.setName("currency");
attributeConfig.setExtractorClassName("com.bank.CurrencyExtractor");

MapConfig mapConfig = new MapConfig();
mapConfig.addAttributeConfig(attributeConfig);

Declarative Configuration

3.12.x:

<hazelcast>
    ...
    <map name="trades">
        <attributes>
            <attribute extractor="com.bank.CurrencyExtractor">currency</attribute>
        </attributes>
    </map>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <map name="trades">
        <attributes>
            <attribute extractor-class-name="com.bank.CurrencyExtractor">currency</attribute>
        </attributes>
    </map>
    ...
</hazelcast>

Also, some custom query attribute classes were previously abstract classes with one abstract method. They have been converted into functional interfaces:

Implementing ValueExtractor

3.12.x:

public static class PortableNameExtractor extends ValueExtractor<ValueReader, Object> {
    @Override
    public void extract(ValueReader target, Object argument, ValueCollector collector) {
        target.read("name", new ValueCallback<Object>() {
            @Override
            public void onResult(Object value) {
                collector.addObject(value);
            }
        });
    }
}

5.0:

public static class PortableNameExtractor implements ValueExtractor<ValueReader, Object> {
    @Override
    public void extract(ValueReader target, Object argument, ValueCollector collector) {
        target.read("name", (ValueCallback) value -> collector.addObject(value));
    }
}

Removal of MapReduce

The deprecated MapReduce API has been removed. As its successors and replacements, you can use aggregations on top of the query infrastructure and the Hazelcast Jet engine distributed computing platform.

See the following table for the before (MapReduce) / after (Jet engine) word count sample.

Word Count Sample

3.12.x (MapReduce):

JobTracker tracker = hazelcastInstance.getJobTracker("default");

IMap<String, String> map = hazelcastInstance.getMap(MAP_NAME);
KeyValueSource<String, String> source = KeyValueSource.fromMap(map);

Job<String, String> job = tracker.newJob(source);
ICompletableFuture<Map<String, Integer>> future = job
        .mapper(new TokenizerMapper())
        .combiner(new WordcountCombinerFactory())
        .reducer(new WordcountReducerFactory())
        .submit();

System.out.println(ToStringPrettyfier.toString(future.get()));

5.0 (Jet engine):

Pattern delimiter = Pattern.compile("\\W+");
Pipeline p = Pipeline.create();
p.readFrom(Sources.<String, String>map(MAP_NAME))
    .flatMap(e -> Traversers.traverseArray(delimiter.split(e.getValue().toLowerCase())))
    .filter(word -> !word.isEmpty())
    .groupingKey(Functions.wholeItem())
    .aggregate(AggregateOperations.counting())
    .writeTo(Sinks.map(COUNTS));

hazelcastInstance.getJet().newJob(p).join();

printResults(hazelcastInstance.getMap(COUNTS));

See the Word Count Task for a full insight.

Refactoring of Migration Listener

The MigrationListener API has been refactored. With this change, an event is published when a new migration process starts and another event when migration is completed. These events include statistics about the migration process including the start time, planned migration count, completed migration count, etc.

Additionally, a migration event is published on each replica migration, both for primary and backup replica migrations. This event includes the partition ID, replica index and migration progress statistics.

In 3.12.x, the following events were listened for by MigrationListener:

  • migrationStarted

  • migrationCompleted

  • migrationFailed

In 5.0, the following events are available instead:

  • migrationStarted

  • migrationFinished

  • replicaMigrationCompleted

  • replicaMigrationFailed

See the following table for the before/after samples.

Implementing a Migration Listener

3.12.x:

import com.hazelcast.core.MigrationEvent;
import com.hazelcast.core.MigrationListener;

public class ClusterMigrationListener implements MigrationListener {
    @Override
    public void migrationStarted(MigrationEvent migrationEvent) {
        System.err.println("Started: " + migrationEvent);
    }
    @Override
    public void migrationCompleted(MigrationEvent migrationEvent) {
        System.err.println("Completed: " + migrationEvent);
    }
    @Override
    public void migrationFailed(MigrationEvent migrationEvent) {
        System.err.println("Failed: " + migrationEvent);
    }
}

5.0:

import com.hazelcast.partition.MigrationListener;
import com.hazelcast.partition.MigrationState;
import com.hazelcast.partition.ReplicaMigrationEvent;

public class ClusterMigrationListener implements MigrationListener {

    @Override
    public void migrationStarted(MigrationState state) {
        System.out.println("Migration Started: " + state);
    }

    @Override
    public void migrationFinished(MigrationState state) {
        System.out.println("Migration Finished: " + state);
    }

    @Override
    public void replicaMigrationCompleted(ReplicaMigrationEvent event) {
        System.out.println("Replica Migration Completed: " + event);
    }

    @Override
    public void replicaMigrationFailed(ReplicaMigrationEvent event) {
        System.out.println("Replica Migration Failed: " + event);
    }
}

See the MigrationListener Javadoc for a full insight.

Defaulting to OpenSSL

Hazelcast defaults to using OpenSSL when:

Changes in Security Configurations

Replacing group by Simple Cluster Name Configuration

The GroupConfig class has been removed. In both the client and member configurations, GroupConfig (or <group> in XML) has been replaced by a simple cluster name configuration. The password part of GroupConfig, which was already deprecated, has now been removed.

See the following table for the before/after sample configurations.

Declarative Configuration

3.12.x:

<hazelcast>
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
</hazelcast>

5.0:

<hazelcast>
    <cluster-name>dev</cluster-name>
</hazelcast>

Programmatic Configuration

3.12.x:

Config configProd = new Config();
configProd.getGroupConfig().setName( "production" );

Config configDev = new Config();
configDev.getGroupConfig().setName( "development" );

5.0:

Config configProd = new Config();
configProd.setClusterName( "production" );

Config configDev = new Config();
configDev.setClusterName( "development" );

Member Authentication and Identity Configuration

Hazelcast IMDG 4.0 replaced the <member-credentials-factory>, <member-login-modules> and <client-login-modules> configurations with references to security realms. Security realms are a new abstraction in the security configuration of Hazelcast members: a realm defines a security configuration independently of the place where it is used, and a component requesting security simply references the realm by name.

See the following table for the before/after sample configurations.

3.12.x:

<security enabled="true">
    <member-credentials-factory class-name="com.hazelcast.examples.MyCredentialsFactory">
        <properties>
            <property name="property">value</property>
        </properties>
    </member-credentials-factory>
    <member-login-modules>
        <login-module class-name="com.hazelcast.examples.MyRequiredLoginModule" usage="REQUIRED">
            <properties>
                <property name="property">value</property>
            </properties>
        </login-module>
    </member-login-modules>
    <client-login-modules>
        <login-module class-name="com.hazelcast.examples.MyRequiredLoginModule" usage="REQUIRED">
            <properties>
                <property name="property">value</property>
            </properties>
        </login-module>
    </client-login-modules>
</security>

5.0:

<security enabled="true">
    <realms>
        <realm name="realm1">
            <authentication>
                <jaas>
                    <login-module class-name="com.hazelcast.examples.MyRequiredLoginModule" usage="REQUIRED">
                        <properties>
                            <property name="property">value</property>
                        </properties>
                    </login-module>
                </jaas>
            </authentication>
            <identity>
                <credentials-factory class-name="com.hazelcast.examples.MyCredentialsFactory">
                    <properties>
                        <property name="property">value</property>
                    </properties>
                </credentials-factory>
            </identity>
        </realm>
    </realms>
    <member-authentication realm="realm1"/>
    <client-authentication realm="realm1"/>
</security>

Client Identity Configuration

The <credentials> configuration is no longer supported in the client security configuration. The existing <credentials-factory> configuration is more flexible and can fully replace it. There are also new <username-password> and <token> configuration elements which simplify the migration.

See the following table for the before/after sample configurations.

3.12.x:

<security>
    <credentials>com.acme.security.JohnDoeCredentials</credentials>
</security>

5.0:

<security>
    <username-password username="johndoe" password="s3crEt"/>
</security>

JAAS Authentication Cleanups

Introducing New Principal Types

The ClusterPrincipal class representing an authenticated user within the JAAS Subject has been replaced by three different principal types:

  • ClusterIdentityPrincipal

  • ClusterRolePrincipal

  • ClusterEndpointPrincipal

All these new principal types share the HazelcastPrincipal interface, so it is simple to get or remove them all from the Subject.

With this change, the Credentials object is no longer referenced from the principals.

Also, DefaultPermissionPolicy, which used to consume ClusterPrincipal and read the endpoint address from it, now works with the new ClusterRolePrincipal and ClusterEndpointPrincipal types.
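For instance, a short sketch of how the shared interface simplifies cleanup, using only the standard JAAS Subject API:

import java.util.Set;

import javax.security.auth.Subject;

import com.hazelcast.security.HazelcastPrincipal;

public final class PrincipalCleanup {

    // Removes every Hazelcast principal (identity, role and endpoint) from the Subject in one go
    static void removeHazelcastPrincipals(Subject subject) {
        Set<HazelcastPrincipal> hazelcastPrincipals =
                subject.getPrincipals(HazelcastPrincipal.class);
        subject.getPrincipals().removeAll(hazelcastPrincipals);
    }
}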

See the following table for the before/after sample IPermissionPolicy implementations.

3.12.x:

public PermissionCollection getPermissions(Subject subject, Class<? extends Permission> type) {
    PermissionCollection collection = ...;
    for (ClusterPrincipal principal : subject.getPrincipals(ClusterPrincipal.class)) {
        String endpoint = principal.getEndpoint();
        String principalName = principal.getPrincipal();
        addPermissionsToPrincipal(collection, principalName, endpoint);
    }
    return collection;
}

5.0:

public PermissionCollection getPermissions(Subject subject, Class<? extends Permission> type) {
    PermissionCollection collection = ...;
    Iterator<ClusterEndpointPrincipal> endpointIterator =
            subject.getPrincipals(ClusterEndpointPrincipal.class).iterator();
    String endpoint = endpointIterator.hasNext() ? endpointIterator.next().getName() : null;
    for (ClusterRolePrincipal rolePrincipal : subject.getPrincipals(ClusterRolePrincipal.class)) {
        String role = rolePrincipal.getName();
        addPermissionsToPrincipal(collection, role, endpoint);
    }
    return collection;
}

Changes in ClusterLoginModule

ClusterLoginModule in Hazelcast IMDG 3.12.x contained four abstract methods to alter the behavior of LoginModule:

  • onLogin

  • onCommit

  • onAbort

  • onLogout

Back then, the login module retrieved the Credentials object and used it to create the ClusterPrincipal.

In Hazelcast IMDG 4.0, only onLogin remains abstract; the others now have empty default implementations. The login module creates the ClusterEndpointPrincipal automatically and adds it to the Subject.

A new abstract method, getName(), has been added; it is used for constructing the ClusterIdentityPrincipal. Child implementations can call the addRole(String) method to add ClusterRolePrincipals with the given name.

Also, ClusterLoginModule introduces three boolean login module options which allow skipping the addition of principals of a given type to the JAAS Subject. This makes it possible, for instance, to have just one ClusterIdentityPrincipal in the Subject even if there are more login modules in the chain. These options are:

  • skipIdentity

  • skipRole

  • skipEndpoint

See the following table for the before/after sample implementations.

3.12.x:

public class TestLoginModule extends ClusterLoginModule {

    @Override
    public boolean onLogin() throws LoginException {
        UsernamePasswordCredentials usernamePasswordCredentials = (UsernamePasswordCredentials) credentials;
        if ("foo".equals(usernamePasswordCredentials.getUsername())
                && "bar".equals(usernamePasswordCredentials.getPassword())) {
            // the "foo" principal is added
            return true;
        }
        throw new FailedLoginException("Username or password doesn't match expected value.");
    }

    @Override
    public boolean onCommit() {
        return loginSucceeded;
    }

    @Override
    protected boolean onAbort() {
        return true;
    }

    @Override
    protected boolean onLogout() {
        return true;
    }
}

5.0:

public class TestLoginModule extends ClusterLoginModule {

    private String name;

    @Override
    public boolean onLogin() throws LoginException {
        NameCallback ncb = new NameCallback("");
        PasswordCallback pcb = new PasswordCallback("", false);
        try {
            callbackHandler.handle(new Callback[] { ncb, pcb });
        } catch (IOException | UnsupportedCallbackException e) {
            throw new LoginException("Unable to handle credentials");
        }
        name = ncb.getName();
        if ("foo".equals(name)
                && Arrays.equals("bar".toCharArray(), pcb.getPassword())) {
            addRole("admin");
            return true;
        }
        throw new FailedLoginException("Username or password doesn't match expected value.");
    }

    @Override
    protected String getName() {
        return name;
    }
}

Changes in Credentials for Client Protocol

In Hazelcast IMDG 3.12.x, custom credentials coming through the client protocol were always automatically deserialized. To avoid this, the Credentials interface has been redesigned in Hazelcast IMDG 4.0 to contain only the getName() method (renamed from getPrincipal()). The endpoint handling has been moved out of the interface.

Now, Credentials has two new subinterfaces:

  • PasswordCredentials: The existing UsernamePasswordCredentials class is the default implementation.

  • TokenCredentials: The new SimpleTokenCredentials class has been introduced to implement it.

TokenCredentials is just a holder for a byte array; the authentication implementations themselves, i.e., custom LoginModules, are responsible for deserializing the data when needed.

The data from the client authentication message is no longer deserialized by Hazelcast members. For standard authentication, a UsernamePasswordCredentials object is constructed; for custom authentication, a SimpleTokenCredentials object is constructed. If the original Credentials object is not a PasswordCredentials or TokenCredentials instance, it can be deserialized manually. However, deserialization during authentication remains a dangerous operation and should be avoided.

See the following table for the before/after sample implementations.

3.12.x:

public boolean onLogin() throws LoginException {
    if (credentials == null || !(credentials instanceof CustomCredentials)) {
        throw new FailedLoginException("No valid CustomCredentials found");
    }
    CustomCredentials custom = (CustomCredentials) credentials;
    if (!verify(custom.getJsonToken())) {
        throw new FailedLoginException("JSON token is not valid.");
    }
    return true;
}

5.0:

public boolean onLogin() throws LoginException {
    CredentialsCallback cc = new CredentialsCallback();
    try {
        callbackHandler.handle(new Callback[] { cc });
    } catch (IOException | UnsupportedCallbackException e) {
        throw new FailedLoginException("Unable to retrieve credentials. " + e.getMessage());
    }
    Credentials creds = cc.getCredentials();
    if (creds == null || !(creds instanceof TokenCredentials)) {
        throw new FailedLoginException("No valid TokenCredentials found");
    }
    TokenCredentials tokenCreds = (TokenCredentials) creds;
    if (!verify(new String(tokenCreds.getToken()))) {
        throw new FailedLoginException("JSON token is not valid.");
    }
    return true;
}

Credentials serialization and deserialization in the member protocol has not been changed.

Changes in JAAS Callbacks

In Hazelcast IMDG 3.x, the CallbackHandler implementation ClusterCallbackHandler was only able to work with Hazelcast’s CredentialsCallback. In Hazelcast IMDG 4.0, it also works with the standard Java Callback implementations NameCallback and PasswordCallback.

DefaultLoginModule used to rely on the login module options to retrieve the member's Config object. Now, custom Callback types have been implemented which can be used to retrieve additional data required for authentication.

List of the supported Callbacks in Hazelcast IMDG 4.0:

  • javax.security.auth.callback.NameCallback

  • javax.security.auth.callback.PasswordCallback

  • com.hazelcast.security.CredentialsCallback (provides access to the incoming Credentials instance)

  • com.hazelcast.security.EndpointCallback (allows retrieving the remote host address; it replaces Credentials.getEndpoint() from Hazelcast IMDG 3.12.x)

  • com.hazelcast.security.ConfigCallback (allows retrieving the member's Config object)

  • com.hazelcast.security.SerializationServiceCallback (provides access to Hazelcast SerializationService)

  • com.hazelcast.security.ClusterNameCallback (provides access to Hazelcast cluster name sent by the connecting party)

Renaming Quorum as Split Brain Protection

Both in the API/code samples and documentation, the term "quorum" has been replaced by "split-brain protection".

With this change, the following configuration parameters have been renamed:

Declarative configuration elements:

  • quorum → split-brain-protection

  • quorum-size → minimum-cluster-size

  • quorum-ref → split-brain-protection-ref

  • quorum-type → protect-on

  • probabilistic-quorum → probabilistic-split-brain-protection

  • recently-active-quorum → recently-active-split-brain-protection

  • quorum-function-class-name → split-brain-protection-function-class-name

  • quorum-listeners → split-brain-protection-listeners

Programmatic configuration objects and methods:

  • QuorumConfig → SplitBrainProtectionConfig

  • QuorumConfig.setSize() → SplitBrainProtectionConfig.setMinimumClusterSize()

  • QuorumConfig.setType() → SplitBrainProtectionConfig.setProtectOn()

  • QuorumListenerConfig → SplitBrainProtectionListenerConfig

  • QuorumEvent → SplitBrainProtectionEvent

  • QuorumService → SplitBrainProtectionService

  • QuorumService.getQuorum() → SplitBrainProtectionService.getSplitBrainProtection()

  • isPresent() → hasMinimumSize()

  • setQuorumName() → setSplitBrainProtectionName()

  • addQuorumConfig() → addSplitBrainProtectionConfig()

  • newProbabilisticQuorumConfigBuilder() → newProbabilisticSplitBrainProtectionConfigBuilder()

  • newRecentlyActiveQuorumConfigBuilder() → newRecentlyActiveSplitBrainProtectionConfigBuilder()

See the following table for a before/after sample.

3.12.x:

<hazelcast>
    ...
    <quorum name="quorumRuleWithFourMembers" enabled="true">
        <quorum-size>4</quorum-size>
    </quorum>
    <map name="default">
        <quorum-ref>quorumRuleWithFourMembers</quorum-ref>
    </map>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <split-brain-protection name="splitBrainProtectionRuleWithFourMembers" enabled="true">
        <minimum-cluster-size>4</minimum-cluster-size>
    </split-brain-protection>
    <map name="default">
        <split-brain-protection-ref>splitBrainProtectionRuleWithFourMembers</split-brain-protection-ref>
    </map>
    ...
</hazelcast>
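The same rule can be expressed programmatically with the renamed API from the lists above; a sketch using SplitBrainProtectionConfig:

import com.hazelcast.config.Config;
import com.hazelcast.config.SplitBrainProtectionConfig;

public class SplitBrainProtectionExample {
    public static void main(String[] args) {
        SplitBrainProtectionConfig protectionConfig = new SplitBrainProtectionConfig();
        protectionConfig.setName("splitBrainProtectionRuleWithFourMembers");
        protectionConfig.setEnabled(true);
        protectionConfig.setMinimumClusterSize(4);

        Config config = new Config();
        config.addSplitBrainProtectionConfig(protectionConfig);
        // Reference the rule from the map, mirroring split-brain-protection-ref above
        config.getMapConfig("default")
                .setSplitBrainProtectionName("splitBrainProtectionRuleWithFourMembers");
    }
}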

See the Split-Brain Protection section for more information on network partitioning.

Renaming getId to getClassId in IdentifiedDataSerializable

The getId() method of the IdentifiedDataSerializable interface has a very common name, so naming conflicts happened frequently; for example, database entities typically also have a getId() method. Therefore, it has been renamed to getClassId().

See the following table showing the interface in 3.12.x and in 5.0.

3.12.x:

package com.hazelcast.nio.serialization;

public interface IdentifiedDataSerializable extends DataSerializable {

    int getFactoryId();

    int getId();
}

5.0:

package com.hazelcast.nio.serialization;

public interface IdentifiedDataSerializable extends DataSerializable {

    int getFactoryId();

    int getClassId();
}

See here for more information on IdentifiedDataSerializable.

Introducing Lambda Friendly Interfaces

Entry Processor

The EntryBackupProcessor interface has been removed in favor of EntryProcessor, which now defines how entries are processed both on the primary and the backup replicas.

Because of this, the AbstractEntryProcessor class has been removed as well. This should make writing entry processors more lambda friendly.

3.12.x:

map.executeOnKey(key, new AbstractEntryProcessor<Integer, Employee>() {

    @Override
    public Object process(Map.Entry<Integer, Employee> entry) {
        Employee employee = entry.getValue();
        if (employee == null) {
            employee = new Employee();
        }
        employee.setSalary(value);
        entry.setValue(employee);
        return null;
    }
});

5.0:

map.executeOnKey(key,
        entry -> {
            Employee employee = entry.getValue();
            if (employee == null) {
                employee = new Employee();
            }
            employee.setSalary(value);
            entry.setValue(employee);
            return null;
        });

This should cover most cases. If you need to define a custom backup entry processor, you can override the EntryProcessor#getBackupProcessor method.

map.executeOnKey(key, new EntryProcessor<Object, Object, Object>() {
    @Override
    public Object process(Entry<Object, Object> entry) {
        // process and return the primary entry result here
        return null;
    }

    private Object processBackupEntry(Entry<Object, Object> backupEntry) {
        // process and return the backup entry result here
        return null;
    }

    @Nullable
    @Override
    public EntryProcessor<Object, Object, Object> getBackupProcessor() {
        return this::processBackupEntry;
    }
});

Functional and Serializable Interfaces

Hazelcast now provides functional interfaces whose single abstract method declares a checked exception. These interfaces are also Serializable, so they can be readily used when providing a lambda which is then serialized.
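For example, a brief sketch with FunctionEx from the com.hazelcast.function package: its single abstract method applyEx may throw a checked exception, while the interface itself extends both java.util.function.Function and Serializable.

import com.hazelcast.function.FunctionEx;

public class FunctionExExample {
    public static void main(String[] args) {
        // The lambda is Serializable, so it can be shipped to members,
        // even though applyEx is allowed to declare a checked exception.
        FunctionEx<String, Integer> parse = Integer::parseInt;
        System.out.println(parse.apply("42"));
    }
}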

The Projection class was an abstract class for historical reasons. It has been turned into a functional interface so that it is more lambda-friendly.

See the following table for the before/after sample implementations.

3.12.x:

Collection<String> keys = map.project(new Projection<Entry<String, Double>, String>() {
    @Override
    public String transform(Entry<String, Double> input) {
        return input.getKey();
    }
});

5.0:

Collection<String> keys = map.project(Entry::getKey);

Expanding Nullable/Nonnull Annotations

The APIs of the distributed data structures have been made cleaner by adding Nullable and Nonnull annotations, and their API documentation has been improved:

  • Now, it is obvious when looking at the API where null is allowed and where it is not.

  • Some methods were throwing NullPointerException while others were throwing IllegalArgumentException. Now the behavior is aligned and an unexpected null argument results in a NullPointerException being thrown.

  • Some methods actually allowed null but there was no indication that they did.

  • A method used on the member would accept null and behave accordingly, while on the client the same method would throw a NullPointerException. The behavior of the member and the client has now been aligned.

The data structures and interfaces enhanced in this sense are listed below:

  • IQueue, ISet, IList

  • IMap, MultiMap, ReplicatedMap

  • Cluster

  • ITopic

  • Ringbuffer

  • ScheduledExecutor
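A minimal illustration of the aligned null handling described above, assuming a locally started member:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class NullArgumentExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("default");
        try {
            map.put(null, "value"); // an unexpected null argument...
        } catch (NullPointerException expected) {
            // ...now consistently throws NullPointerException on member and client
            System.out.println("null key rejected");
        } finally {
            hz.shutdown();
        }
    }
}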

Removal of ICompletableFuture

In Hazelcast IMDG 3.12.x series, com.hazelcast.core.ICompletableFuture was introduced to enable reactive programming style. ICompletableFuture was intended as a temporary, JDK 6 compatible replacement for java.util.concurrent.CompletableFuture that was introduced in Java 8. Since Hazelcast 4.0 requires Java 8, the user-facing asynchronous Hazelcast API methods now have their return type changed from ICompletableFuture to Java 8’s java.util.concurrent.CompletionStage.

Dependent computation stages registered using the default async methods which do not accept an explicit Executor argument (such as thenAcceptAsync, whenCompleteAsync, etc.) are executed by java.util.concurrent.ForkJoinPool#commonPool(), unless it does not support a parallelism level of at least two, in which case a new Thread is created to run each task.

See the following table for the before/after samples.

3.12.x:

import com.hazelcast.core.ExecutionCallback;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class Main {

    public static void main(String[] args) {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
        IMap<Integer, String> map = hazelcastInstance.getMap("map");

        map.putAsync(1, "one").andThen(new ExecutionCallback<String>() {
            @Override
            public void onResponse(String response) {
                map.getAsync(1).andThen(new ExecutionCallback<String>() {
                    @Override
                    public void onResponse(String response) {
                        System.out.println("Value of 1 is " + response);
                    }

                    @Override
                    public void onFailure(Throwable t) {
                        t.printStackTrace();
                    }
                });
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        });
    }
}

5.0:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class Main {

    public static void main(String[] args) {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
        IMap<Integer, String> map = hazelcastInstance.getMap("map");

        map.putAsync(1, "one").whenCompleteAsync((response, throwable) -> {
            if (throwable == null) {
                map.getAsync(1).thenAcceptAsync(v -> {
                    System.out.println("Value of 1 is " + v);
                });
            } else {
                throwable.printStackTrace();
            }
        });
    }
}

WAN Replication Configuration Changes

Previously, configuring WAN replication was problematic:

  • You needed to specify the fully qualified class name of the WAN implementation that should be used. In most cases, this was the built-in Hazelcast IMDG Enterprise Edition (EE) implementation.

  • There were various configuration options, some of which were present as Java class instance fields or XML child nodes and attributes while others were present in a properties list. The issue with the property list is that there was no checking for typos, no documentation and no IDE help.

  • If you wanted to use a custom WAN publisher SPI implementation, some configuration options did not make sense as they were tied to our implementation, e.g., WAN queue size.

  • It was verbose.

The tag that was supposed to cover both cases, i.e., using the built-in Hazelcast EE implementation and using a custom WAN replication implementation (wan-publisher in XML or WanPublisherConfig programmatically), has been separated into two configuration elements/classes, one for the built-in and one for custom WAN publishers:

  • batch-publisher (declarative configuration) or WanBatchPublisherConfig (programmatic configuration)

  • custom-publisher (declarative configuration) or WanCustomPublisherConfig (programmatic configuration)

This means, if you’re using the Hazelcast built-in WAN replication, the new configuration element is batch-publisher or WanBatchPublisherConfig. If you’re using a custom WAN replication implementation, the new configuration element is custom-publisher or WanCustomPublisherConfig.

Additionally, the group password has been removed from the configuration and now only the cluster name is checked when connecting to the target cluster. This has been done to align the behavior with members forming a single cluster, where members with different passwords but with the same cluster name (previously group name) could form a cluster.

See the following table for the before/after built-in WAN publisher examples:

Declarative Configuration

3.12.x:

<wan-publisher group-name="builtInPublisher" publisher-id="builtInPublisherId">
    <class-name>com.hazelcast.enterprise.wan.impl.replication.WanBatchReplication</class-name>
    <queue-capacity>15000</queue-capacity>
    <queue-full-behavior>DISCARD_AFTER_MUTATION</queue-full-behavior>
    <initial-publisher-state>REPLICATING</initial-publisher-state>
    <wan-sync>
        <consistency-check-strategy>NONE</consistency-check-strategy>
    </wan-sync>
    <properties>
        <property name="endpoints">10.3.5.1:5701,10.3.5.2:5701</property>
        <property name="batch.size">1000</property>
        <property name="batch.max.delay.millis">2000</property>
        <property name="response.timeout.millis">60000</property>
        <property name="ack.type">ACK_ON_OPERATION_COMPLETE</property>
        <property name="snapshot.enabled">false</property>
        <property name="group.password">nyc-pass</property>
    </properties>
</wan-publisher>

5.0:

<batch-publisher>
    <cluster-name>builtInPublisher</cluster-name>
    <publisher-id>builtInPublisherId</publisher-id>
    <batch-size>1000</batch-size>
    <batch-max-delay-millis>2000</batch-max-delay-millis>
    <response-timeout-millis>60000</response-timeout-millis>
    <acknowledge-type>ACK_ON_OPERATION_COMPLETE</acknowledge-type>
    <initial-publisher-state>REPLICATING</initial-publisher-state>
    <snapshot-enabled>false</snapshot-enabled>
    <queue-full-behavior>DISCARD_AFTER_MUTATION</queue-full-behavior>
    <queue-capacity>10000</queue-capacity>
    <target-endpoints>10.3.5.1:5701,10.3.5.2:5701</target-endpoints>
    <sync>
        <consistency-check-strategy>NONE</consistency-check-strategy>
    </sync>
</batch-publisher>

Programmatic Configuration

3.12.x:

WanPublisherConfig publisherConfig = new WanPublisherConfig()
        .setGroupName("builtInPublisher")
        .setPublisherId("builtInPublisherId")
        .setClassName("com.hazelcast.enterprise.wan.impl.replication.WanBatchReplication")
        .setQueueCapacity(15000)
        .setQueueFullBehavior(WANQueueFullBehavior.DISCARD_AFTER_MUTATION)
        .setInitialPublisherState(WanPublisherState.REPLICATING);
publisherConfig.getWanSyncConfig().setConsistencyCheckStrategy(ConsistencyCheckStrategy.NONE);
Map<String, Comparable> properties = publisherConfig.getProperties();
properties.put("endpoints", "10.3.5.1:5701,10.3.5.2:5701");
properties.put("batch.size", 1000);
properties.put("batch.max.delay.millis", 2000);
properties.put("response.timeout.millis", 60000);
properties.put("ack.type", WanAcknowledgeType.ACK_ON_OPERATION_COMPLETE.toString());
properties.put("snapshot.enabled", false);
properties.put("group.password", "nyc-pass");
WanBatchPublisherConfig publisherConfig = new WanBatchPublisherConfig()
        .setClusterName("builtInPublisher")
        .setPublisherId("builtInPublisherId")
        .setClassName("com.hazelcast.enterprise.wan.impl.replication.WanBatchReplication")
        .setQueueCapacity(15000)
        .setQueueFullBehavior(WanQueueFullBehavior.DISCARD_AFTER_MUTATION)
        .setInitialPublisherState(WanPublisherState.REPLICATING)
        .setTargetEndpoints("10.3.5.1:5701,10.3.5.2:5701")
        .setBatchSize(1000)
        .setBatchMaxDelayMillis(2000)
        .setResponseTimeoutMillis(60000)
        .setAcknowledgeType(WanAcknowledgeType.ACK_ON_OPERATION_COMPLETE)
        .setSnapshotEnabled(false);
publisherConfig.getWanSyncConfig().setConsistencyCheckStrategy(ConsistencyCheckStrategy.NONE);

See the following table for the before/after custom WAN publisher examples:

Declarative Configuration

3.12.x:

<wan-publisher group-name="customWanPublisherId">
    <class-name>com.myCompany.MyImplementation</class-name>
    <properties>
        <property name="some.property">some-value</property>
        <property name="some.other.property">some-other-value</property>
    </properties>
</wan-publisher>

5.0:

<custom-publisher>
    <publisher-id>customPublisherId</publisher-id>
    <class-name>com.myCompany.MyImplementation</class-name>
    <properties>
        <property name="some.property">some-value</property>
        <property name="some.other.property">some-other-value</property>
    </properties>
</custom-publisher>

Programmatic Configuration

3.12.x:

WanPublisherConfig publisherConfig = new WanPublisherConfig()
        .setGroupName("customWanPublisherId")
        .setClassName("com.myCompany.MyImplementation");
Map<String, Comparable> properties = publisherConfig.getProperties();
properties.put("some.property", "some-value");
properties.put("some.other.property", "some-other-value");
WanCustomPublisherConfig publisherConfig = new WanCustomPublisherConfig()
        .setPublisherId("customWanPublisherId")
        .setClassName("com.myCompany.MyImplementation");
Map<String, Comparable> properties = publisherConfig.getProperties();
properties.put("some.property", "some-value");
properties.put("some.other.property", "some-other-value");

See here for more information on WAN Replication.

WAN Replication SPI Changes

In the IMDG 3.12.x series, the WAN publisher SPI allowed you to plug into the lifecycle of a map/cache entry and replicate the updates to another system; for example, you might implement replication to Kafka or a JMS queue, or even write map and cache event changes to a log on disk. The SPI was not very intuitive though:

  • It was not clear which interface needed to be implemented (WanPublisher vs. WanReplicationEndpoint).

  • You had to implement different interfaces, depending on whether you were using Hazelcast IMDG Open Source or Enterprise edition.

  • There were cases of leaking internals which don’t make sense for some custom implementations.

  • There were unused methods in the public SPI.

We have provided a new and cleaner WAN publisher SPI after 3.12.x. You only need to implement a single interface: com.hazelcast.wan.WanPublisher. This implementation can then be set in the WAN replication configuration and be used with both Hazelcast Open Source and Enterprise editions.

Predicate API Cleanups

The following refactors and cleanups have been performed on the public Predicate related API:

  • Moved the following classes from the com.hazelcast.query package to com.hazelcast.query.impl.predicates:

    • IndexAwarePredicate

    • VisitablePredicate

    • SqlPredicate/Parser

    • TruePredicate

  • Moved the FalsePredicate and SkipIndexPredicate classes to the com.hazelcast.query.impl.predicates package.

  • Converted PagingPredicate and PartitionPredicate to interfaces and added PagingPredicateImpl and PartitionPredicateImpl to the com.hazelcast.query.impl.predicates package.

  • Converted PredicateBuilder and EntryObject to interfaces (and made EntryObject a nested interface in PredicateBuilder) and added PredicateBuilderImpl to the com.hazelcast.query.impl.predicates package.

  • The public API classes/interfaces no longer extend IndexAwarePredicate/VisitablePredicate; this dependency has been moved to the impl classes.

  • Introduced the new factory methods in Predicates:

    • newPredicateBuilder()

    • sql()

    • pagingPredicate()

    • partitionPredicate()

Consequently, the public Predicate API now provides only interfaces (Predicate, PagingPredicate and PartitionPredicate) with no dependencies on any internal APIs.
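A short sketch of the new factory methods; the map name, the Employee type and the salary attribute are illustrative assumptions:

import java.io.Serializable;
import java.util.Collection;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.query.PagingPredicate;
import com.hazelcast.query.Predicate;
import com.hazelcast.query.Predicates;

public class PredicateFactoryExample {

    public static class Employee implements Serializable {
        public int salary;
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Integer, Employee> employees = hz.getMap("employees");

        // Predicates.sql() replaces direct instantiation of SqlPredicate
        Predicate<Integer, Employee> highSalary = Predicates.sql("salary > 50000");
        // Predicates.pagingPredicate() replaces direct instantiation of PagingPredicate
        PagingPredicate<Integer, Employee> firstPage = Predicates.pagingPredicate(highSalary, 10);
        Collection<Employee> page = employees.values(firstPage);
        System.out.println("Matches on the first page: " + page.size());
    }
}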

See the Predicates API section for more information on predicates.

Changing the UUID String Type to UUID

Some public APIs that returned UUID strings have been changed to return UUID. These changes include the getUuid() method of the Endpoint interface, the getTxnId() method of the TransactionContext interface, the return values of listener registrations, and the registrationId parameters of the methods that de-register listeners.

See the following table for the before/after sample implementations.

3.12.x:

HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
String registrationId = hazelcastInstance.getClientService().addClientListener(new ClientListener() {
    @Override
    public void clientConnected(Client client) {
        String clientUuid = client.getUuid();
        System.out.println("Client connected >>> " + clientUuid);
    }

    @Override
    public void clientDisconnected(Client client) {
        String clientUuid = client.getUuid();
        System.out.println("Client disconnected >>> " + clientUuid);
    }
});
hazelcastInstance.getClientService().removeClientListener(registrationId);

5.0:

HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
UUID registrationId = hazelcastInstance.getClientService().addClientListener(new ClientListener() {
    @Override
    public void clientConnected(Client client) {
        UUID clientUuid = client.getUuid();
        System.out.println("Client connected >>> " + clientUuid);
    }

    @Override
    public void clientDisconnected(Client client) {
        UUID clientUuid = client.getUuid();
        System.out.println("Client disconnected >>> " + clientUuid);
    }
});
hazelcastInstance.getClientService().removeClientListener(registrationId);

Removal of Deprecated Concurrency API Implementations

After the introduction of CP Subsystem in Hazelcast IMDG 3.12, the legacy implementations of the distributed concurrency APIs, e.g., ILock and IAtomicLong, were deprecated. In IMDG 4.0 and later, these deprecated implementations, along with the ILock and ICondition interfaces, have been completely removed.

After Hazelcast IMDG 3.12, CP Subsystem also received an unsafe operation mode which provides weaker consistency guarantees, similar to the former implementations in the Hazelcast IMDG 3.x series.

For more information, see the CP Subsystem section.

See the following table for the before/after samples.

3.12.x:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IAtomicReference;
import com.hazelcast.core.ICountDownLatch;
import com.hazelcast.core.ILock;
import com.hazelcast.core.ISemaphore;

public class Main {

    public static void main(String[] args) {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();

        IAtomicLong atomiclong = hazelcastInstance.getAtomicLong("atomiclong");
        atomiclong.incrementAndGet();

        IAtomicReference<String> atomicref = hazelcastInstance.getAtomicReference("atomicref");
        atomicref.set("value");

        ILock lock = hazelcastInstance.getLock("lock");
        lock.tryLock();

        ISemaphore semaphore = hazelcastInstance.getSemaphore("semaphore");
        semaphore.tryAcquire();

        ICountDownLatch latch = hazelcastInstance.getCountDownLatch("latch");
        latch.countDown();
    }
}

5.0:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.CPSubsystem;
import com.hazelcast.cp.IAtomicLong;
import com.hazelcast.cp.IAtomicReference;
import com.hazelcast.cp.ICountDownLatch;
import com.hazelcast.cp.ISemaphore;
import com.hazelcast.cp.lock.FencedLock;

public class Main {

    public static void main(String[] args) {
        HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
        CPSubsystem cpSubsystem = hazelcastInstance.getCPSubsystem();

        IAtomicLong atomiclong = cpSubsystem.getAtomicLong("atomiclong");
        atomiclong.incrementAndGet();

        IAtomicReference<String> atomicref = cpSubsystem.getAtomicReference("atomicref");
        atomicref.set("value");

        FencedLock lock = cpSubsystem.getLock("lock");
        lock.tryLock();

        ISemaphore semaphore = cpSubsystem.getSemaphore("semaphore");
        semaphore.tryAcquire();

        ICountDownLatch latch = cpSubsystem.getCountDownLatch("latch");
        latch.countDown();
    }
}

Removal of Legacy Merge Policies

All legacy merge policies have been removed. Their replacements are available under the com.hazelcast.spi.merge package.

These are the replacements for IMap and ICache:

Removed IMap Merge Policies and Their Replacements

  • com.hazelcast.map.merge.HigherHitsMapMergePolicy → com.hazelcast.spi.merge.HigherHitsMergePolicy

  • com.hazelcast.map.merge.LatestUpdateMapMergePolicy → com.hazelcast.spi.merge.LatestUpdateMergePolicy

  • com.hazelcast.map.merge.PassThroughMergePolicy → com.hazelcast.spi.merge.PassThroughMergePolicy

  • com.hazelcast.map.merge.PutIfAbsentMapMergePolicy → com.hazelcast.spi.merge.PutIfAbsentMergePolicy

Removed ICache Merge Policies and Their Replacements

  • com.hazelcast.cache.merge.HigherHitsCacheMergePolicy → com.hazelcast.spi.merge.HigherHitsMergePolicy

  • com.hazelcast.cache.merge.LatestAccessCacheMergePolicy → com.hazelcast.spi.merge.LatestAccessMergePolicy

  • com.hazelcast.cache.merge.PassThroughCacheMergePolicy → com.hazelcast.spi.merge.PassThroughMergePolicy

  • com.hazelcast.cache.merge.PutIfAbsentCacheMergePolicy → com.hazelcast.spi.merge.PutIfAbsentMergePolicy

Moreover, the setMergePolicy/getMergePolicy methods have been removed from MapConfig, ReplicatedMapConfig and CacheConfig. They have been replaced by the setMergePolicyConfig/getMergePolicyConfig methods.

The merge-policy declarative configuration element used in older IMDG versions can still be used:

<merge-policy batch-size="100">LatestAccessMergePolicy</merge-policy>
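Programmatically, the equivalent is a MergePolicyConfig attached through the new setMergePolicyConfig method; a sketch:

import com.hazelcast.config.Config;
import com.hazelcast.config.MergePolicyConfig;

public class MergePolicyExample {
    public static void main(String[] args) {
        MergePolicyConfig mergePolicyConfig = new MergePolicyConfig();
        mergePolicyConfig.setPolicy("com.hazelcast.spi.merge.LatestAccessMergePolicy");
        mergePolicyConfig.setBatchSize(100);

        Config config = new Config();
        // setMergePolicyConfig replaces the removed setMergePolicy
        config.getMapConfig("default").setMergePolicyConfig(mergePolicyConfig);
    }
}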

See here for more information on configuring merge policies.

Changes in AWS Configuration

AWS programmatic configuration has been merged with a more universal configuration infrastructure common to all cloud providers. The declarative configuration remains unchanged. See here for more information on configuring Hazelcast on AWS.

See the following table for the before/after samples.

3.12.x:

AwsConfig config = new AwsConfig();
config.setSecretKey("my-secret-key");
config.setRegion("my-region");
config.setSecurityGroupName("my-security-group");
config.setTagKey("my-tag-key");
config.setTagValue("my-tag-value");
...
config.setEnabled(true);

5.0:

AwsConfig config = new AwsConfig();
config.setProperty("secret-key", "my-secret-key");
config.setProperty("region", "my-region");
config.setProperty("security-group-name", "my-security-group-name");
config.setProperty("tag-key", "my-tag-key");
config.setProperty("tag-value", "my-tag-value");
...
config.setEnabled(true);

Removal of Deprecated System Properties

The following deprecated cluster properties were removed:

  • hazelcast.rest.enabled

  • hazelcast.memcache.enabled

  • hazelcast.http.healthcheck.enabled

See Using the REST Endpoint Groups to learn how to configure a Hazelcast instance to expose REST endpoints, Health Check and Monitoring to learn how to enable the health check, and Memcache Client to learn how to enable the Memcache client request listener service.
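For instance, a hedged sketch of exposing the health-check endpoints through the REST endpoint groups instead of the removed properties, assuming the RestApiConfig API described in Using the REST Endpoint Groups:

import com.hazelcast.config.Config;
import com.hazelcast.config.RestApiConfig;
import com.hazelcast.config.RestEndpointGroup;

public class RestApiExample {
    public static void main(String[] args) {
        Config config = new Config();
        RestApiConfig restApiConfig = new RestApiConfig();
        restApiConfig.setEnabled(true);
        // Replaces the removed hazelcast.http.healthcheck.enabled property
        restApiConfig.enableGroups(RestEndpointGroup.HEALTH_CHECK);
        config.getNetworkConfig().setRestApiConfig(restApiConfig);
    }
}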

Removal of Deprecations in LoginModuleConfig

The following deprecated methods have been removed:

  • getImplementation(), replaced by getClassName().

  • setImplementation(Object), replaced by setClassName(String).

In declarative configuration, the class-name attribute should be used instead.

Removal of Deprecations in MultiMapConfig

The following deprecated methods have been removed:

  • getSyncBackupCount(), replaced by getBackupCount().

  • setSyncBackupCount(int), replaced by setBackupCount(int).

In declarative configuration, the backup-count property should be used instead.

See here for more information on configuring MultiMap.

Removal of Deprecations in PartitioningStrategyConfig

The misspelled setPartitionStrategy(PartitioningStrategy) method has been removed; setPartitioningStrategy(PartitioningStrategy) should be used instead.

See here for more information on configuring the partitioning strategy.

Removal of Deprecations in ServiceConfig

The following deprecated methods have been removed:

  • getServiceImpl(), replaced by getImplementation().

  • setServiceImpl(Object), replaced by setImplementation(Object).

See the ServiceConfig Javadoc for more information.

Removal of Deprecations in TransactionContext

The deprecated getXaResource() method has been removed; HazelcastInstance.getXAResource() should be used instead.
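A minimal sketch of the replacement:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.transaction.HazelcastXAResource;

// A sketch: the XA resource is now obtained from the instance itself.
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
HazelcastXAResource xaResource = hz.getXAResource();
// Enlist xaResource with your JTA transaction manager as before.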

See the HazelcastInstance Javadoc for more information.

Removal of Deprecations in DistributedObjectEvent

The deprecated getObjectId() method has been removed; getObjectName() should be used instead.

See the DistributedObjectEvent Javadoc for more information.

Removal of Deprecated EntryListener-based Listener API in IMap

The following set of deprecated EntryListener-based listener API methods has been removed:

  • addLocalEntryListener(EntryListener<K, V>)

  • addLocalEntryListener(EntryListener<K, V>, Predicate<K, V>, boolean)

  • addLocalEntryListener(EntryListener<K, V>, Predicate<K, V>, K, boolean)

  • addEntryListener(EntryListener<K, V>, boolean)

  • addEntryListener(EntryListener<K, V>, K, boolean)

The following MapListener-based methods should be used as replacements:

  • addLocalEntryListener(MapListener)

  • addLocalEntryListener(MapListener, Predicate<K,V>, boolean)

  • addLocalEntryListener(MapListener, Predicate<K,V>, K, boolean)

  • addEntryListener(MapListener, boolean)

  • addEntryListener(MapListener, K, boolean)

EntryListener-based listeners are still supported by the newer MapListener-based API and declarative configuration.
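As a minimal sketch, a MapListener-based registration might look like the following (the map name and listener body are illustrative):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

// A sketch: addEntryListener(MapListener, boolean); the boolean requests
// that the entry value be included in the event.
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, String> map = hz.getMap("default");
map.addEntryListener(
        (EntryAddedListener<String, String>) event ->
                System.out.println("added: " + event.getKey() + " -> " + event.getValue()),
        true);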

Changes in MapLoader

In IMDG 4.x and Platform 5.x releases, loading map entries from a store through your MapLoader implementation (loadAll()) fails if any of the keys or values are null.
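A hypothetical MapLoader sketch that respects this constraint by omitting missing entries instead of mapping them to null (ProductMapLoader and fetchFromStore are illustrative names):

import com.hazelcast.map.MapLoader;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A sketch: loadAll() must not return null keys or values, so entries
// missing from the backing store are left out of the result map entirely.
public class ProductMapLoader implements MapLoader<Long, String> {
    @Override
    public String load(Long key) {
        return fetchFromStore(key); // returning null here simply means "not found"
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long key : keys) {
            String value = fetchFromStore(key);
            if (value != null) { // omit the entry rather than map key -> null
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() {
        return List.of(1L, 2L, 3L); // illustrative key set
    }

    private String fetchFromStore(Long key) {
        // Placeholder for a real data store lookup.
        return key == 1L ? "product-1" : null;
    }
}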

Changes in IMap Eviction Configuration

The way eviction is configured for a map has been simplified and improved.

See the following before/after samples.

3.12.x:

<hazelcast>
    ...
    <map name="default">
        <eviction-policy>LRU</eviction-policy>
        <max-size policy="PER_NODE">20</max-size>
    </map>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <map name="default">
        <eviction eviction-policy="LRU" max-size-policy="PER_NODE" size="20"/>
    </map>
    ...
</hazelcast>
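The programmatic counterpart of the 5.0 sample, as a minimal sketch:

import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MaxSizePolicy;

// A sketch: the unified EvictionConfig replaces the separate
// eviction-policy and max-size settings of 3.12.x.
Config config = new Config();
config.getMapConfig("default").getEvictionConfig()
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setMaxSizePolicy(MaxSizePolicy.PER_NODE)
        .setSize(20);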

Changes in IMap Custom Eviction Policy Configuration

The way a custom eviction policy is configured for a map has been simplified and improved.

See the following before/after samples.

3.12.x:

<hazelcast>
    ...
    <map name="default">
        <map-eviction-policy-class-name>
            com.mycompany.MyMapEvictionPolicyComparator
        </map-eviction-policy-class-name>
    </map>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <map name="default">
        <eviction comparator-class-name="com.mycompany.MyMapEvictionPolicyComparator"/>
    </map>
    ...
</hazelcast>
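Programmatically, a minimal sketch of the 5.0 equivalent (using the comparator class from the sample above):

import com.hazelcast.config.Config;

// A sketch: the comparator class name now lives on EvictionConfig.
Config config = new Config();
config.getMapConfig("default").getEvictionConfig()
        .setComparatorClassName("com.mycompany.MyMapEvictionPolicyComparator");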

Changes in EntryListenerConfig

The return type of the EntryListenerConfig.getImplementation() method has been changed from EntryListener to MapListener.

See the following before/after snippets.

3.12.x:

EntryListenerConfig config = new EntryListenerConfig();
EntryListener listenerImpl = config.getImplementation();

5.0:

EntryListenerConfig config = new EntryListenerConfig();
MapListener listenerImpl = config.getImplementation();

Changes in REST Endpoints

The following REST endpoints have been changed:

  • /hazelcast/rest/mancenter/changeurl has been removed

  • All /hazelcast/rest/mancenter/wan/* endpoints have been renamed to /hazelcast/rest/wan/*

The following REST endpoints now require cluster name and password as the first two URL-encoded parameters:

  • /hazelcast/rest/wan/sync/map

  • /hazelcast/rest/wan/sync/allmaps

  • /hazelcast/rest/wan/clearWanQueues

  • /hazelcast/rest/wan/addWanConfig

  • /hazelcast/rest/wan/pausePublisher

  • /hazelcast/rest/wan/stopPublisher

  • /hazelcast/rest/wan/resumePublisher

  • /hazelcast/rest/wan/consistencyCheck/map

The output of the following endpoints has been changed to JSON:

  • /hazelcast/health/node-state

  • /hazelcast/health/cluster-state

  • /hazelcast/health/cluster-safe

  • /hazelcast/health/migration-queue-size

  • /hazelcast/health/cluster-size

  • /hazelcast/health/ready

  • /hazelcast/rest/cluster

Changes in the Diagnostics Configuration

With the introduction of the metrics system in Hazelcast IMDG 4.0, the metrics collected by Diagnostics and by the metrics system are shared. This has come with the following changes to the system properties that configure diagnostics:

  • hazelcast.diagnostics.metric.level is not available anymore. Collecting debug metrics can be enabled by setting the hazelcast.metrics.debug.enabled or hazelcast.client.metrics.debug.enabled system property to true for members and clients, respectively (see the sketch after this list).

  • hazelcast.diagnostics.metric.distributed.datastructures is not available anymore since data structure metrics are required by the other metric consumers. Therefore, they are collected by default and there is no need to enable them for diagnostics.
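For example, a minimal sketch of enabling debug metrics on a member programmatically, equivalent to passing -Dhazelcast.metrics.debug.enabled=true on the command line:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// A sketch: debug metrics replace the removed
// hazelcast.diagnostics.metric.level property.
Config config = new Config();
config.setProperty("hazelcast.metrics.debug.enabled", "true");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);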

Changes in the Management Center Configuration

As Management Center now uses the Hazelcast Java client to communicate with the cluster, all attributes and nested elements except the scripting-enabled attribute have been removed from the programmatic, XML and YAML Management Center configurations, i.e., from the ManagementCenterConfig class and the management-center configuration element.

The default value of the scripting-enabled attribute is false, whereas in Hazelcast 3.x scripting was enabled by default for Hazelcast Open Source.

A full example of the settings available in the Management Center configuration now looks like the following:

<management-center scripting-enabled="true" />
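The programmatic counterpart, as a minimal sketch:

import com.hazelcast.config.Config;

// A sketch: scripting-enabled is the only remaining Management Center
// setting; it defaults to false.
Config config = new Config();
config.getManagementCenterConfig().setScriptingEnabled(true);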

This has come with the following change to the system properties that configure Management Center:

  • hazelcast.mc.url.change.enabled is not available anymore.

Changes in the Event Journal Configuration

The event journal used to be configured as a top-level configuration element. With IMDG 4.0 and afterwards, this restriction has been removed; the event journal configuration can now be part of both map and cache configurations. This eliminates the need to additionally specify the map/cache names on the event journal configuration to connect it to the map/cache configurations.

See the following before/after snippets.

3.12.x:

<hazelcast>
    ...
    <event-journal enabled="false">
        <mapName>default</mapName>
        <capacity>10000</capacity>
        <time-to-live-seconds>0</time-to-live-seconds>
    </event-journal>
    ...
    <event-journal enabled="false">
        <cacheName>default</cacheName>
        <capacity>10000</capacity>
        <time-to-live-seconds>0</time-to-live-seconds>
    </event-journal>
    ...
</hazelcast>

5.0:

<hazelcast>
    ...
    <map name="default">
        <event-journal enabled="false">
            <capacity>10000</capacity>
            <time-to-live-seconds>0</time-to-live-seconds>
        </event-journal>
    </map>
    ...
    <cache name="*">
        <event-journal enabled="false">
            <capacity>10000</capacity>
            <time-to-live-seconds>0</time-to-live-seconds>
        </event-journal>
    </cache>
    ...
</hazelcast>
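Programmatically, the 5.0 configuration above corresponds to a sketch like the following:

import com.hazelcast.config.Config;

// A sketch: the event journal is now configured on the map (and cache)
// configuration itself rather than as a top-level element.
Config config = new Config();
config.getMapConfig("default").getEventJournalConfig()
        .setEnabled(false)
        .setCapacity(10000)
        .setTimeToLiveSeconds(0);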