Published Solutions
-
CloudBlue SaaS Ticket Severity
Severity level indicates the relative impact of an issue on our customer's systems or business processes. CloudBlue Support uses the following severity level definitions to classify all CloudBlue SaaS support requests:

Severity Level 1 (Urgent)
A ticket with Urgent severity must meet at least one of the following:
- The production platform with a business-critical service is down, inoperable, inaccessible or unavailable.
- A business-critical service is not working after a change, backup, recovery or migration.
- The issue significantly affects a large group (more than 33%) of users.
As soon as a workaround is available, the ticket will be closed and a follow-up ticket will be created to further investigate, identify the root cause of the issue, and prevent such incidents in the future.

Severity Level 2 (High)
A ticket with High severity must meet at least one of the following:
- An entire component or functionality of the production platform does not work and/or cannot be used.
- Significant performance degradation causes high impact on business operations for a significant number of processed transactions.
- A production platform has a major issue after a change, a rollback has had no effect, and the issue is still ongoing.
- The incident significantly affects a large group (more than 33%) of users.
As soon as a workaround is available, the ticket will be closed and a follow-up ticket will be created to further investigate, identify the root cause of the issue, and prevent such incidents in the future.

Severity Level 3 (Medium)
A ticket with Medium severity meets at least one of the following:
- It affects non-essential functions or has minimal impact on business operations.
- The problem is localized or has isolated impact.
- The problem is an operational nuisance or results from documentation errors.
- The problem is not of Urgent or High severity, but is otherwise a failure of a pre-existing function.
Severity Level 4 (Low)
A ticket with Low severity must meet at least one of the following:
- It is a follow-up ticket for an Urgent or High ticket that requires further investigation to identify the root cause of an issue and to prevent such issues in the future.
- It is a request for a new configuration, deployment or functionality.

NOTE: To have tickets of Urgent and High severity addressed within the defined service level, you MUST call our 8x5 or 24x7 phone number indicated during the onboarding process and inform us about the case. A contact person of the customer must be reachable by phone to answer questions and consult with our support team. If a contact person is not reachable, the ticket severity may be lowered to Medium.
-
Upgrading PostgreSQL from version 11 to 13 using the pg_upgrade tool
Before the upgrade
Upgrade procedure
Disaster recovery

Before the upgrade

This article explains how to upgrade PostgreSQL from version 11 to 13 using the pg_upgrade tool.

Before the upgrade, the BSS database must be checked for tables with OID columns. As these columns are not supported starting with PostgreSQL version 12, such tables, if found, must be updated as explained below. A pre-check detecting tables with OID columns is performed automatically before the upgrade; however, it is possible to run it manually. To do this, complete the following steps:

Connect to the BSS database.

Run the following query:

select c.relname,
       c.reltuples,
       pg_size_pretty(c.relpages::bigint * 8192) table_size
from pg_class c
join pg_namespace ns on (ns.oid = c.relnamespace)
where c.relkind = 'r'
  and ns.nspname = 'public'
  and exists (select 1 from pg_attribute a
              where a.attrelid = c.oid
                and a.attnum < 0
                and a.attname = 'oid');

For each table returned by the query, remove the OID column:

ALTER TABLE <table_name> SET WITHOUT OIDS;

Note: The ALTER TABLE statement locks the table exclusively and rewrites the whole table. This can take significant time and consume a lot of disk space for large tables.

Upgrade procedure

The upgrade procedure consists of two stages, as the replica is not upgraded along with the master database:
- Master database upgrade
- Replica upgrade

Master database upgrade

To upgrade PostgreSQL from version 11 to 13 for CloudBlue, complete the following steps.

Log in to a workstation with access to the Kubernetes cluster where CloudBlue is deployed, and stop all applications that use the PostgreSQL database, including OSS, BSS, and other microservices. Depending on the infrastructure, the commands to do that can be different. These are example commands:

kubectl scale statefulset --replicas=0 oss-node
kubectl scale deploy --all --replicas=0 -n

Note: The subsequent commands must be executed on the BSS and OSS database hosts.

Install PostgreSQL 13.
Stop the puppet service using the following command:

systemctl stop puppet

Download the postgresql-13 packages for the latest minor release (in our example, version 13.7, released in May 2022) and install them:

postgresql13-13.7-1PGDG.rhel7.x86_64.rpm
postgresql13-contrib-13.7-1PGDG.rhel7.x86_64.rpm
postgresql13-libs-13.7-1PGDG.rhel7.x86_64.rpm
postgresql13-server-13.7-1PGDG.rhel7.x86_64.rpm

Download the packages from the https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-7-x86_64/ repository to the current directory and install them using the following command as an example:

yum localinstall ./postgresql13-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-contrib-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-libs-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-server-13.7-1PGDG.rhel7.x86_64.rpm

Initialize a new PostgreSQL cluster using this command as an example:

su - postgres -c '/usr/pgsql-13/bin/initdb --locale=en_US.UTF-8 --pgdata=/var/lib/pgsql/13/data'

Ensure that the free disk space available for the /var/lib/pgsql/13/data directory is at least 1.2 times (20 percent) greater than the disk space used by the /var/lib/pgsql/11/data directory. This is necessary for a successful upgrade. Below are example commands that can be used to find out the disk space used and available:

Sample command: du -hs /var/lib/pgsql/11/data
Sample output: 799M /var/lib/pgsql/11/data

Sample command: df -h /var/lib/pgsql/13/data
Sample output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 30G 3,6G 26G 13% /

Back up the data. Create a directory to store the dump. Below are example commands:

mkdir /opt/dumps/host
chown -R postgres:postgres /opt/dumps/host

Check the free space in the directory. Below is an example command:

df -h /opt/dumps/host

Export the data.
Below is an example command:

export cpu_cnt=`grep -c "processor" /proc/cpuinfo` && jobs=$(( $cpu_cnt * 3 / 2 )); echo "jobs=$jobs" && echo "Start at "`date` && dump_path="/opt/dumps/host" && pg_13_bin="/usr/pgsql-13/bin" && cd ${dump_path} && for DB in $(psql -U postgres -d postgres -t -c "select datname from pg_database where not datname ~ 'template|oids|test|pass'"); do echo "${dump_path}/${DB} `date`"; su postgres -c "mkdir ${dump_path}/${DB}"; su postgres -c "${pg_13_bin}/pg_dump -d ${DB} -Z 1 -j ${jobs} -F d -v -C -f ${dump_path}/${DB}"; done && su postgres -c "${pg_13_bin}/pg_dumpall -r -U postgres -f ${dump_path}/users_n_roles.sql" && echo "Finish at "`date`

Upgrade the data using the pg_upgrade tool.

Check that a cast for OID columns exists. Below is an example query that you can use:

psql -U postgres -d postgres -t -c "select p.proname from pg_cast c join pg_proc p on (p.oid = c.castfunc) where p.proname = 'varchar2uuid'"

If the result is varchar2uuid, the cast exists and the data can be created in the PostgreSQL 13 database.

There are views in the pg-exporter-bss schema of the BSS database and the pg-exporter-oss schema of the OSS database that cannot be migrated to PostgreSQL 13 as is. These views must be removed and re-created after the migration.
To do this, complete the following steps:

To remove the views, use the following commands (note the single quotes around the SQL so that the quoted schema names survive the shell):

psql -U postgres -d pba -t -c 'drop view if exists "pg-exporter-bss".pg_stat_activity'
psql -U postgres -d oss -t -c 'drop view if exists "pg-exporter-oss".pg_stat_activity'

Save the following script for creating the view in the BSS database to the file /var/lib/pgsql/create_view_bss.sql:

--PBA
do $$
begin
  begin
    create or replace view "pg-exporter-bss".pg_stat_activity
      (datid, datname, pid, usesysid, usename, application_name, client_addr, client_hostname, client_port, backend_start, xact_start, query_start, state_change, wait_event_type, wait_event, state, backend_xid, backend_xmin, query, backend_type)
    as
    SELECT get_pg_stat_activity.datid, get_pg_stat_activity.datname, get_pg_stat_activity.pid, get_pg_stat_activity.usesysid, get_pg_stat_activity.usename, get_pg_stat_activity.application_name, get_pg_stat_activity.client_addr, get_pg_stat_activity.client_hostname, get_pg_stat_activity.client_port, get_pg_stat_activity.backend_start, get_pg_stat_activity.xact_start, get_pg_stat_activity.query_start, get_pg_stat_activity.state_change, get_pg_stat_activity.wait_event_type, get_pg_stat_activity.wait_event, get_pg_stat_activity.state, get_pg_stat_activity.backend_xid, get_pg_stat_activity.backend_xmin, get_pg_stat_activity.query, get_pg_stat_activity.backend_type
    FROM get_pg_stat_activity() get_pg_stat_activity;
    alter view "pg-exporter-bss".pg_stat_activity owner to pba;
    begin
      grant select on "pg-exporter-bss".pg_stat_activity to "pg-exporter-bss";
    exception when undefined_object then
      raise warning 'no "pg-exporter-bss" role exists!';
    end;
    begin
      grant select on "pg-exporter-bss".pg_stat_activity to readonlyall;
    exception when undefined_object then
      raise notice 'no readonlyall role - nothing additional is granted';
    end;
  exception when invalid_schema_name then
    raise notice 'no "pg-exporter-bss" exists, no view is created';
  end;
end;
$$;

Save the following script for creating the
view in the OSS database to the file /var/lib/pgsql/create_view_oss.sql:

do $$
begin
  begin
    create or replace view "pg-exporter-oss".pg_stat_activity
      (datid, datname, pid, usesysid, usename, application_name, client_addr, client_hostname, client_port, backend_start, xact_start, query_start, state_change, wait_event_type, wait_event, state, backend_xid, backend_xmin, query, backend_type)
    as
    SELECT get_pg_stat_activity.datid, get_pg_stat_activity.datname, get_pg_stat_activity.pid, get_pg_stat_activity.usesysid, get_pg_stat_activity.usename, get_pg_stat_activity.application_name, get_pg_stat_activity.client_addr, get_pg_stat_activity.client_hostname, get_pg_stat_activity.client_port, get_pg_stat_activity.backend_start, get_pg_stat_activity.xact_start, get_pg_stat_activity.query_start, get_pg_stat_activity.state_change, get_pg_stat_activity.wait_event_type, get_pg_stat_activity.wait_event, get_pg_stat_activity.state, get_pg_stat_activity.backend_xid, get_pg_stat_activity.backend_xmin, get_pg_stat_activity.query, get_pg_stat_activity.backend_type
    FROM get_pg_stat_activity() get_pg_stat_activity;
    alter view "pg-exporter-oss".pg_stat_activity owner to oss;
    begin
      grant select on "pg-exporter-oss".pg_stat_activity to "pg-exporter-oss";
    exception when undefined_object then
      raise warning 'no "pg-exporter-oss" role exists!';
    end;
    begin
      grant select on "pg-exporter-oss".pg_stat_activity to readonlyall;
    exception when undefined_object then
      raise notice 'no readonlyall role - nothing additional is granted';
    end;
  exception when invalid_schema_name then
    raise notice 'no "pg-exporter-oss" exists, no view is created';
  end;
end;
$$;

Stop the PostgreSQL 11 service. You can use the following command:

systemctl stop postgresql-11

Check that there are no postgres processes running on the database host:

ps -ef | grep "postgres" | grep -v "grep"

Warning! If any processes were found, you must identify, disconnect, and stop the corresponding application(s). After that, repeat this step to stop the PostgreSQL 11 service.
Temporarily enable local access to all databases in the cluster for the postgres user only, and back up the PostgreSQL configuration file pg_hba.conf. Use the following commands:

cp /var/lib/pgsql/11/data/pg_hba.conf /root/pg_hba.conf_bak_`date '+%Y-%m-%d'`
su - postgres -c 'cp /var/lib/pgsql/11/data/pg_hba.conf /var/lib/pgsql/11/data/pg_hba.conf.prod'
su - postgres -c 'echo "local all postgres trust" >/var/lib/pgsql/11/data/pg_hba.conf'

Copy the configuration files from the PostgreSQL 11 to the PostgreSQL 13 server:

su - postgres -c 'cp /var/lib/pgsql/11/data/pg_hba.conf* /var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/pg_ident.conf /var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/postgresql.conf /var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/server.* /var/lib/pgsql/13/data'

If required, enable PostgreSQL 13 features:
- If the database host has more than 8 CPUs, set max_worker_processes to the number of CPUs minus 1. The default value is 8.
- Only if the database host has an SSD disk, set effective_io_concurrency to 100.
- The following parameters are added in PostgreSQL 13: autovacuum_vacuum_insert_scale_factor=0.2, autovacuum_vacuum_insert_threshold=1000, hash_mem_multiplier=1.

Run pg_upgrade in check mode.
su - postgres -c '/usr/pgsql-13/bin/pg_upgrade --jobs=8 --new-port=8352 --old-port=8352 --check --old-datadir "/var/lib/pgsql/11/data" --new-datadir "/var/lib/pgsql/13/data" --old-bindir "/usr/pgsql-11/bin" --new-bindir "/usr/pgsql-13/bin"'

All results must be positive, as in the example below:

# su - postgres -c '/usr/pgsql-13/bin/pg_upgrade --jobs=8 --new-port=8352 --old-port=8352 --check --old-datadir "/var/lib/pgsql/11/data" --new-datadir "/var/lib/pgsql/13/data" --old-bindir "/usr/pgsql-11/bin" --new-bindir "/usr/pgsql-13/bin"'
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok

*Clusters are compatible*

If there are OID columns in the billing database, the response will be similar to the one below, and you will need to update the tables to remove OIDs.
# su - postgres -c '/usr/pgsql-13/bin/pg_upgrade --jobs=8 --new-port=8352 --old-port=8352 --check --old-datadir "/var/lib/pgsql/11/data" --new-datadir "/var/lib/pgsql/13/data" --old-bindir "/usr/pgsql-11/bin" --new-bindir "/usr/pgsql-13/bin"'
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               fatal

Your installation contains tables declared WITH OIDS, which is not supported anymore.
Consider removing the oid column using
    ALTER TABLE ... SET WITHOUT OIDS;
A list of tables with the problem is in the file:
    tables_with_oids.txt

Failure, exiting

Skip this step if no columns with OIDs were found in the billing database. Otherwise, complete the steps below to make the necessary corrections.

Prepare the following script to be executed in the PostgreSQL 11 database. Run the following commands:

sed 's/In database:/\\c/g' /var/lib/pgsql/tables_with_oids.txt | awk -F '.' '{if ($0 ~ /\\c /) {print $0} else {print "alter table "$1".\""$2"\" set without oids;"}}' > /var/lib/pgsql/alter_tables_without_oids.sql
chown postgres /var/lib/pgsql/alter_tables_without_oids.sql

Run the following commands to apply the fix:

systemctl start postgresql-11
su - postgres -c 'psql -U postgres -d pba -f /var/lib/pgsql/alter_tables_without_oids.sql'
systemctl stop postgresql-11

Check if there are still columns with OIDs by completing step 5.g again.
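The sed/awk pipeline above turns pg_upgrade's tables_with_oids.txt report into a psql script of ALTER TABLE statements. A minimal sketch of that transformation on a made-up report file (the sample file contents and /tmp paths here are assumptions for illustration, based on pg_upgrade's "In database:" report format):

```shell
# Hypothetical tables_with_oids.txt content, mimicking pg_upgrade's report format
cat > /tmp/tables_with_oids.txt <<'EOF'
In database: pba
public.sometable
EOF

# Same transformation as in the article: "In database: X" becomes a \c X
# connect command, and every schema.table line becomes an ALTER TABLE statement.
sed 's/In database:/\\c/g' /tmp/tables_with_oids.txt \
  | awk -F '.' '{if ($0 ~ /\\c /) {print $0} else {print "alter table "$1".\""$2"\" set without oids;"}}' \
  > /tmp/alter_tables_without_oids.sql

cat /tmp/alter_tables_without_oids.sql
# \c pba
# alter table public."sometable" set without oids;
```

The generated file is then fed to psql, which switches to each listed database via \c and rewrites each listed table without its OID column.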
Execute the upgrade using the following command:

su - postgres -c '/usr/pgsql-13/bin/pg_upgrade --jobs=8 --new-port=8352 --old-port=8352 --old-datadir "/var/lib/pgsql/11/data" --new-datadir "/var/lib/pgsql/13/data" --old-bindir "/usr/pgsql-11/bin" --new-bindir "/usr/pgsql-13/bin"'

Restore the PostgreSQL configuration by executing the following commands:

su - postgres -c 'mv /var/lib/pgsql/11/data/pg_hba.conf.prod /var/lib/pgsql/11/data/pg_hba.conf'
su - postgres -c 'mv /var/lib/pgsql/13/data/pg_hba.conf.prod /var/lib/pgsql/13/data/pg_hba.conf'

More details on the upgrade from PostgreSQL 11 to PostgreSQL 13 can be found in the PostgreSQL documentation.

Start the PostgreSQL 13 service:

systemctl enable postgresql-13
systemctl start postgresql-13

Disable PostgreSQL 11:

systemctl disable postgresql-11

If the pg_stat_statements extension is used, upgrade it using the script below as an example, then run \dx in psql to verify the result:

do $$
begin
  if exists (
    select 1 from pg_extension
    where extname = 'pg_stat_statements' and extversion = '1.6'
  ) then
    execute 'drop extension pg_stat_statements cascade';
    execute 'create extension pg_stat_statements';
  end if;
end $$;

Create the views removed in step 5 of these instructions. To do this, run the scripts saved in step 5b that will create the views in the OSS and BSS databases. If there are no pg-exporter-oss or pg-exporter-bss schemas, the scripts do not generate any exceptions and only log information messages. Use the following commands to run the scripts:

su - postgres -c 'psql -U postgres -d pba -f /var/lib/pgsql/create_view_bss.sql'
su - postgres -c 'psql -U postgres -d oss -f /var/lib/pgsql/create_view_oss.sql'

Reindex and analyze the new database cluster:

su - postgres -c '/usr/pgsql-13/bin/reindexdb --all --jobs=8'
su - postgres -c '/usr/pgsql-13/bin/vacuumdb --all --jobs=8 --analyze-in-stages'

If you use CloudBlue core components of version 21.16 or later, skip this step.
If a cast was used in PostgreSQL 11, create it in PostgreSQL 13:

/usr/pgsql-13/bin/psql -U postgres -d postgres -t -c "select * from setup_varchar2uuid(false)"

Start the CloudBlue applications that use PostgreSQL. Depending on the installation, the component list can be different. Use these commands as an example:

kubectl scale statefulset --replicas=1 oss-node
kubectl scale --replicas=1 deployment/idp

Check the applications' health. In case of an issue, refer to the Disaster Recovery section.

Repair DB replication after the upgrade to PostgreSQL 13:

Identify the master DB replica hosts. This is a sample command to do this:

grep -E "^host\s*replication\s*all\s*([0-9\.]*)" /var/lib/pgsql/11/data/pg_hba.conf | sed -E "s/.*all\s*([0-9\.]*).*/\1/"

Sample output:

10.26.230.128

Before restoring the replica, set the proper value for the wal_keep_size configuration parameter, which defines the amount of WAL logs to be kept on the master database server. In PostgreSQL 11, this parameter was named wal_keep_segments. To calculate the value for the wal_keep_size parameter, multiply the wal_keep_segments value by the wal_segment_size value. Specify the result as the value of the wal_keep_size parameter in the /var/lib/pgsql/13/data/postgresql.conf file. To apply the changes, reload the postgresql-13 service:

systemctl reload postgresql-13

Log in to the replication host and install PostgreSQL 13 there:

yum localinstall ./postgresql13-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-contrib-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-libs-13.7-1PGDG.rhel7.x86_64.rpm ./postgresql13-server-13.7-1PGDG.rhel7.x86_64.rpm

Stop and disable the PostgreSQL 11 service:

systemctl stop postgresql-11
systemctl disable postgresql-11

Make sure that there are no processes of PostgreSQL 11 running.
To do this, execute this command on each server:

ps -ef | grep "postgres" | grep -v "grep"

Restore the replication configuration for PostgreSQL 13:

su - postgres -c '/usr/pgsql-13/bin/initdb --locale=en_US.UTF-8 --pgdata=/var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/pg_hba.conf /var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/pg_ident.conf /var/lib/pgsql/13/data'
su - postgres -c 'cp /var/lib/pgsql/11/data/postgresql.conf /var/lib/pgsql/13/data'

Make sure that there are no PostgreSQL 11 references in the /var/lib/pgsql/13/data/postgresql.conf file. An example of such a reference: data_directory = '/var/lib/postgresql/11/data'.

Repair replication for PostgreSQL 13. To do this, create the /usr/local/bin/db-replica-repair-13.sh script with the content below, specifying the master database's IP address in the maindb parameter:

maindb= ###### fill correct value for master database ######
echo "Stopping DB.."
systemctl stop postgresql-11
echo "Backing up.."
cp -f /var/lib/pgsql/13/data/postgresql.conf /root/postgresql.conf.bak
cp -f /var/lib/pgsql/13/data/pg_hba.conf /root/pg_hba.conf.bak
echo "Rebuilding DB.."
rm -rf /var/lib/pgsql/13/data
sudo -u postgres /usr/pgsql-13/bin/pg_basebackup -D /var/lib/pgsql/13/data -U repl -h $maindb -P -v -R -X fetch
if [ "$?" != "0" ]; then
    echo "Failed."
    exit 1
fi
echo "Restoring configs.."
mv -f /root/postgresql.conf.bak /var/lib/pgsql/13/data/postgresql.conf
echo -e "\nhot_standby=on\n" >> /var/lib/pgsql/13/data/postgresql.conf
cp -f /root/pg_hba.conf.bak /var/lib/pgsql/13/data/pg_hba.conf
chown postgres:postgres -R /var/lib/pgsql/13/data/
echo "Starting DB.."
systemctl start postgresql-13
if [ "$?" != "0" ]; then
    echo "Failed."
    exit 1
fi
sleep 5
echo "Verifying.."
t1=$(sudo -u postgres psql -Aqtc "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))::INT" 2> /dev/null)
sleep 30
t2=$(sudo -u postgres psql -Aqtc "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))::INT" 2> /dev/null)
if (( $t2-$t1 >= 30 )); then
    echo "Replica rebuild failed. Please try again"
    exit 1
else
    echo "All done!"
fi

Run the script to fix replication:

sh /usr/local/bin/db-replica-repair-13.sh

Enable and start the PostgreSQL 13 service:

systemctl enable postgresql-13
systemctl start postgresql-13

After repairing the replica, change the wal_keep_size parameter value to 8GB in the /var/lib/pgsql/13/data/postgresql.conf file of the master database and reload the PostgreSQL configuration:

systemctl reload postgresql-13

Known Issues

Execution of the /usr/local/bin/db-replica-repair-13.sh script can get stuck with the following message:

pg_basebackup: initiating base backup, waiting for checkpoint to complete

If this happens, force a CHECKPOINT in the master database by executing the following on the main DB host:

sudo -u postgres psql -c "CHECKPOINT;"

Disaster Recovery

In case of any issue during the upgrade process, you need to:

Stop all applications connected to the PostgreSQL database, as described in step 1.

Stop and disable the postgresql-13 service:

systemctl stop postgresql-13
systemctl disable postgresql-13

Restore the production pg_hba.conf:

su - postgres -c 'mv /var/lib/pgsql/11/data/pg_hba.conf.prod /var/lib/pgsql/11/data/pg_hba.conf'

Enable and start the postgresql-11 service:

systemctl enable postgresql-11
systemctl start postgresql-11

For the BSS database, restore the "WITH OIDS" option in the PostgreSQL 11 cluster:

su - postgres -c 'psql -U postgres -d pba -f /var/lib/pgsql/alter_tables_without_oids.sql'

Create the views removed in step 5 of the instructions above. To do this, run the scripts saved in step 5b that will create the views in the OSS and BSS databases.
If there are no pg-exporter-oss or pg-exporter-bss schemas, the scripts do not generate any exceptions and only log information messages. Use the following commands to run the scripts:

su - postgres -c 'psql -U postgres -d pba -f /var/lib/pgsql/create_view_bss.sql'
su - postgres -c 'psql -U postgres -d oss -f /var/lib/pgsql/create_view_oss.sql'

Start all application services, as described in step 15.

Make sure to keep the PostgreSQL 13 data for further investigation. This completes the recovery procedure.
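The wal_keep_size calculation described in the replica-repair section above (wal_keep_segments multiplied by wal_segment_size) can be sketched as simple shell arithmetic. The parameter values below are assumptions for illustration; read the real ones from your PostgreSQL 11 postgresql.conf and from "show wal_segment_size" (16MB is PostgreSQL's default segment size):

```shell
# Hypothetical example values for the WAL-retention calculation.
wal_keep_segments=512     # assumed value from the PostgreSQL 11 postgresql.conf
wal_segment_size_mb=16    # default WAL segment size, in MB

# PostgreSQL 13 replaces wal_keep_segments with wal_keep_size:
wal_keep_size_mb=$(( wal_keep_segments * wal_segment_size_mb ))
echo "wal_keep_size = ${wal_keep_size_mb}MB"   # 512 * 16 = 8192MB, i.e. 8GB
```

The resulting value (here, wal_keep_size = 8192MB) is what you would set in /var/lib/pgsql/13/data/postgresql.conf before rebuilding the replica, followed by systemctl reload postgresql-13.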
-
Understanding Ticket Impact and Urgency
CloudBlue Support has introduced a new way to determine the Priority of tickets by focusing on Impact and Urgency. This change aims to streamline the triage and resolution processes and ensure that issues are addressed more effectively.

Impact

Impact refers to the effect an incident has on your organization's service levels. It measures how many customers, resellers or systems are affected by the issue.
- High: An incident that affects a large number of users or critical systems. Example: a platform-level issue affecting 33% or more customers.
- Medium: An incident that affects a moderate number of users or important systems. Example: a feature-level issue affecting 33% or more customers.
- Low: An incident that affects a small number of users or non-critical systems. Example: a single customer is affected.

Urgency

Urgency indicates how quickly an incident needs to be resolved. It measures the time sensitivity of the issue.
- High: An incident that needs immediate attention because it significantly disrupts business operations. Example: a processing system failure affecting important, financially significant orders and causing a potential revenue loss of 33% or more.
- Medium: An incident that needs to be resolved soon but is not immediately critical. Example: a processing system failure affecting less important orders and causing a potential revenue loss of less than 33%.
- Low: An incident that can be resolved within a longer timeframe without causing significant disruption. Example: a minor issue with reports, dashboards or displayed information that does not affect daily operations and can be scheduled for a future update.

How They Work Together

The Priority of a ticket is now determined by combining Impact and Urgency. This is often represented in a Priority Matrix, where each combination of Impact and Urgency levels results in a specific Priority level (e.g., Low, Medium, High and Urgent).
Benefits of the New System

This new approach offers several benefits:
- Consistency: Ensures that all tickets are prioritized based on a standardized assessment, reducing subjectivity.
- Efficiency: Helps your team focus on the most critical issues first, improving response times and service quality.
- Transparency: Provides a clear rationale for ticket prioritization, making it easier for customers to understand why certain issues are addressed sooner.

By accurately assessing the Impact and Urgency of each incident, we can ensure that our team focuses on the most critical issues first, improving overall service efficiency.
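The Impact x Urgency combination described above can be sketched as a small lookup. The mapping below is an illustrative assumption (e.g., High Impact plus High Urgency yielding Urgent), not the official CloudBlue Priority Matrix:

```shell
# Illustrative Priority Matrix lookup; the exact mapping is an assumption.
priority() {  # usage: priority <impact> <urgency>, each one of: high|medium|low
  case "$1/$2" in
    high/high)                        echo "Urgent" ;;
    high/medium|medium/high)          echo "High"   ;;
    high/low|medium/medium|low/high)  echo "Medium" ;;
    *)                                echo "Low"    ;;
  esac
}

priority high high     # Urgent
priority medium high   # High
priority low low       # Low
```

The point of the sketch is the shape of the decision, not the specific cells: every Impact/Urgency pair deterministically maps to exactly one Priority, which is what removes subjectivity from triage.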
-
Bulk Seamless Move Tool with Special Pricing and Cost
This document serves as a guide for partners to import subscriptions from the MSFT Partner Center, singly or in bulk, with special pricing for CBSaaS.

1. Log in to UX1 on the country/OpCo level.
2. Prepare the details below, which are needed to complete the import via the Seamless Move tool:
a. CB Reseller ID
b. CB Customer ID
c. Customer Domain Name
d. MSFT Subscription ID
e. Reseller Cost (Optional)
f. Customer Cost (Optional)
g. Whether billing is to be generated for the current billing period or not
3. With the details prepared above, fill in the text boxes below and continue until step #4 "Review Order Details":
a. You can add more resellers to the list from the first page, as long as the target customer falls under the same OpCo.
b. Review the subscription information and click Next if there are no issues.
c. Verify that the correct plan has been selected and click Next.
4. Here you will have the ability to export the subscription information to Excel. It will download a sheet file called Exported Data.xlsx. This contains the subscription's order details. In this file, refer to the columns below for the reseller and customer cost per unit:
a. Column R (Recurring_Reseller_Unit_Price) – reseller cost.
b. Column T (Recurring_Customer_Unit_Price) – customer cost.
5. After downloading the Excel file and adding the prices and costs, DO NOT click Import. Instead, go to "Admin Operations":
a. Select Bulk import and click Go.
b. From this page, upload the modified Exported Data.xlsx.
c. This section has the option to select whether a billing order is to be generated for the current billing period or not.
6. The last step is the migration summary, which also provides a downloadable file containing the subscriptions that have been migrated to the platform.
-
How to change plan identification during PLM import from plan names to unique IDs
In version 3.0, Product Lifecycle Management (PLM) supports plan identification using unique IDs of the plans. Previously, identification was done using plan names. The identification method is changed per product line. To change it for plans based on a product line, complete the following steps.

Note: PLM must be updated to version 3.0 or later before completing these instructions.

Access a workstation with access to the Kubernetes cluster where your CloudBlue Commerce is deployed.

Enter the OSS pod:

kubectl exec -it oss-node-0 -n -- /bin/bash

Switch to the build directory:

cd build

Copy the export.py script to that directory.

Identify the UUID of the product line whose plans you need to update.

Run the script, replacing the sample UUID with the UUID of your product line:

python export.py --productLineUUID=87ccdca6-dd0c-4648-a38e-e8df1437c1f0

As a result, a message similar to the following will appear:

Completed successfully; the export file saved as product_line_87ccdca6-dd0c-4648-a38e-e8df1437c1f0.csv

Configure IDs for the plans. Depending on the vendor, do this in one of the following ways:
- For Microsoft 365 products, download and run the uid_generator.py script, which completes the UIDs automatically. Remember to replace the sample UUID in the file name with the UUID of your product line: python uid_generator.py --product-line-csv product_line_87ccdca6-dd0c-4648-a38e-e8df1437c1f0.csv
- For other products, obtain the PPR file that contains plan UIDs and update the file generated in step 7, specifying the UID for each plan based on the data from the PPR.
- For in-house products without a PPR file source, complete the PlanUID column in the file generated in step 7 with any unique value for each plan, for example, the MPN.

Copy the import.py script to the build directory on the OSS pod.
Run the script, replacing the sample UUID with the UUID of your product line:

python import.py --productLineUUID=87ccdca6-dd0c-4648-a38e-e8df1437c1f0 --file=product_line_87ccdca6-dd0c-4648-a38e-e8df1437c1f0.csv

From this moment, plan identification during import will be performed using plan IDs.
-
Correcting MOME Permissions
Granting Permissions to the API

To ensure the proper functionality of the application, it is essential to grant the necessary API permissions. Follow the steps below to configure them correctly.

Steps to Grant API Permissions

1. Navigate to API Permissions. Once the web application is created, go to API permissions in the Azure portal. Set the permissions for Microsoft Graph to Delegated Permissions.
2. Select the Required Permissions. Enable the following permissions:
- Access the directory as the signed-in user
- Sign in and read user profile
- Read Delegated Admin relationships with customers
- Manage Delegated Admin relationships with customers
3. Add Partner Center API Permissions. Go to the APIs my organization uses tab. Search for Microsoft Partner Center (identified with the following ID: 4990cffe-04e8-4e8b-808a-1175604b879f).
4. Set Partner Center Permissions. Assign Delegated Permissions to Partner Center.

Additional Considerations

In addition to these steps, ensure that the required API permissions are granted to the Microsoft SaaS App service in Azure. For further details, refer to the official Microsoft SaaS documentation.