Deploy Application In Remote Cluster via IRSA
KubeRocketCI provides the capability to deploy applications securely using IAM Roles for Service Accounts (IRSA) in AWS EKS. This integration enables Kubernetes pods to assume IAM roles for secure and temporary access to AWS resources, eliminating the need for long-lived credentials. While the deployment process is streamlined for most users, the platform also supports advanced configurations for custom permissions and role management, ensuring flexibility for more complex scenarios.
Prerequisites
To use this approach, OIDC (OpenID Connect) must already be configured for your EKS cluster; this is what allows Kubernetes service accounts to securely assume IAM roles. For guidance, follow our documentation EKS OIDC With Keycloak, which walks through the OIDC integration with minimal effort.
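As a quick check that OIDC is in place, you can fetch the cluster's issuer URL and extract the provider ID that appears throughout the trust policies below. This is a minimal sketch; the cluster name and region are placeholders:

```shell
# On a live cluster, fetch the issuer URL (requires AWS CLI credentials):
#   aws eks describe-cluster --name <cluster-name> --region <AWS_REGION> \
#     --query "cluster.identity.oidc.issuer" --output text
# Example value, matching the ID used in the policies below:
ISSUER="https://oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# The last path segment is the OIDC provider ID referenced in every trust policy:
OIDC_ID="${ISSUER##*/}"
echo "$OIDC_ID"
```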
Roles
Cross-account interaction is performed through IRSA with a two-tiered IAM role setup:
- In AWS Account A, the EKS cluster runs the KubeRocketCI cd-pipeline-operator with a dedicated service account.
- Through IRSA, this service account obtains temporary credentials for the AWSIRSA_{cluster_name}_CDPipelineOperator role.
- AWSIRSA_{cluster_name}_CDPipelineOperator can then assume the AWSIRSA_{cluster_name}_CDPipelineAgent role in AWS Account B.
- AWSIRSA_{cluster_name}_CDPipelineAgent configures the environment (Stage) by creating namespaces, generating service accounts, copying secrets, and preparing for deployment.
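The role-naming convention above can be sketched as follows; the account IDs and cluster name are illustrative placeholders:

```shell
CLUSTER_NAME="prod-cluster"   # placeholder cluster name
ACCOUNT_A="111111111111"      # placeholder: account running KubeRocketCI
ACCOUNT_B="222222222222"      # placeholder: account hosting the target cluster

# Role assumed by the cd-pipeline-operator service account via IRSA (Account A):
OPERATOR_ROLE="arn:aws:iam::${ACCOUNT_A}:role/AWSIRSA_${CLUSTER_NAME}_CDPipelineOperator"
# Role in Account B that the operator role assumes to configure the Stage:
AGENT_ROLE="arn:aws:iam::${ACCOUNT_B}:role/AWSIRSA_${CLUSTER_NAME}_CDPipelineAgent"

echo "$OPERATOR_ROLE"
echo "$AGENT_ROLE"
```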
Required IAM Roles and Policies for KRCI
Trust policy for the initial IRSA role that the service account assumes.
View: AWSIRSA_{cluster_name}_CDPipelineOperator (AWS Account A)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
"system:serviceaccount:krci:edp-cd-pipeline-operator"
]
}
}
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:AWSIRSA_{cluster_name}_CDPipelineOperator"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
This policy allows the operator role in Account A to assume roles in Account B.
View: AWSIRSA_{cluster_name}_CDPipelineAssume (AWS Account A)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::<AWS_ACCOUNT_B_ID>:role/AWSIRSA_{cluster_name}_CDPipelineAgent"
}
]
}
Trust policy to control access to Account B resources.
View: AWSIRSA_{cluster_name}_CDPipelineAgent (AWS Account B)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_B_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:*"
}
}
}
]
}
Policy defines permissions for deployments.
View: AWSIRSA_{cluster_name}_CDPipelineClusterAccess (AWS Account B)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters",
"eks:AccessKubernetesApi"
],
"Resource": "arn:aws:eks:<AWS_REGION>:<AWS_ACCOUNT_B_ID>:cluster/<cluster-name>"
}
]
}
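Before creating the roles with the AWS CLI, it helps to sanity-check each policy document locally; malformed JSON is a common cause of IAM API failures. A sketch using a temporary file name of our choosing (the commented `aws iam` call shows where the file would then be used):

```shell
# Write one of the policy documents above to a file (here, the
# cluster-access policy for Account B):
cat > /tmp/cdpipeline-cluster-access.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["eks:DescribeCluster", "eks:ListClusters", "eks:AccessKubernetesApi"],
      "Resource": "arn:aws:eks:<AWS_REGION>:<AWS_ACCOUNT_B_ID>:cluster/<cluster-name>"
    }
  ]
}
EOF

# Validate the JSON locally before sending it to AWS:
python3 -m json.tool /tmp/cdpipeline-cluster-access.json > /dev/null && echo "valid JSON"

# Then attach it to the role, e.g.:
#   aws iam put-role-policy --role-name AWSIRSA_<cluster_name>_CDPipelineAgent \
#     --policy-name CDPipelineClusterAccess \
#     --policy-document file:///tmp/cdpipeline-cluster-access.json
```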
Required IAM Roles and Policies for ArgoCD Cross-Account Deployment
This section outlines the necessary IAM roles and policies required for ArgoCD to manage Kubernetes clusters across AWS accounts securely. The setup follows AWS best practices by using IAM Roles for Service Accounts (IRSA) and cross-account access to limit privileges effectively.
This IAM role is used by ArgoCD to authenticate via OIDC and assume required permissions.
View: AWSIRSA_{cluster_name}_ArgoCDMaster (AWS Account A)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_B_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:*"
},
"StringEquals": {
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
}
}
}
]
}
This policy allows ArgoCD in Account A to describe and access the EKS cluster in Account B.
View: AWSIRSA_{cluster_name}_ArgoCDMasterClusterAccess (AWS Account A)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters",
"eks:AccessKubernetesApi"
],
"Resource": "arn:aws:eks:<AWS_REGION>:<AWS_ACCOUNT_B_ID>:cluster/<cluster-name>"
}
]
}
This role allows ArgoCD service accounts to assume permissions necessary for managing deployments in Account B.
View: AWSIRSA_{cluster_name}_ArgoCDAgentAccess (AWS Account B)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
"system:serviceaccount:argocd:argocd-application-controller",
"system:serviceaccount:argocd:argocd-applicationset-controller",
"system:serviceaccount:argocd:argocd-server"
],
"oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
}
}
}
]
}
This policy allows ArgoCD to assume the AWSIRSA_{cluster_name}_ArgoCDAgentAccess role, granting the permissions it needs within the EKS cluster in Account B.
View: AWSIRSA_{cluster_name}_ArgoCDAssume (AWS Account B)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::<AWS_ACCOUNT_B_ID>:role/AWSIRSA_{cluster_name}_ArgoCDAgentAccess"
}
]
}
Add annotations to service accounts (Account A)
Add annotations to the cd-pipeline-operator service account (Account A)
- patch
- Manifests
kubectl patch serviceaccount edp-cd-pipeline-operator -n krci \
-p '{"metadata": {"annotations": {"eks.amazonaws.com/role-arn": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"}}}'
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"
  name: edp-cd-pipeline-operator
  namespace: krci
After applying annotations to service accounts, restart the corresponding deployment so that new pods are created with the updated IAM role configuration. Use the following command:
kubectl rollout restart deployment cd-pipeline-operator -n krci
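To confirm the annotation landed before restarting, you can inspect the service account. The grep below runs against a saved copy of the manifest so it can be tried offline; the commented kubectl line is the live-cluster equivalent, and the account ID and cluster name in the ARN are placeholders:

```shell
# Live cluster:
#   kubectl get serviceaccount edp-cd-pipeline-operator -n krci -o yaml | grep role-arn
# Offline illustration against a saved manifest:
cat > /tmp/edp-cd-pipeline-operator-sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::111111111111:role/AWSIRSA_prod-cluster_CDPipelineOperator"
  name: edp-cd-pipeline-operator
  namespace: krci
EOF
# Extract the role name from the annotation:
grep -o 'AWSIRSA_[^"]*' /tmp/edp-cd-pipeline-operator-sa.yaml
```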
Annotate Service Accounts in Kubernetes (Account A)
Annotate the service accounts in the account where Argo CD is located with the corresponding role ARN:
- patch
- Manifests
kubectl patch serviceaccount argocd-application-controller -n argocd \
-p '{"metadata": {"annotations": {"eks.amazonaws.com/role-arn": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"}}}'
kubectl patch serviceaccount argocd-applicationset-controller -n argocd \
-p '{"metadata": {"annotations": {"eks.amazonaws.com/role-arn": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"}}}'
kubectl patch serviceaccount argocd-server -n argocd \
-p '{"metadata": {"annotations": {"eks.amazonaws.com/role-arn": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"}}}'
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
  name: argocd-application-controller
  namespace: argocd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
  name: argocd-applicationset-controller
  namespace: argocd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
  name: argocd-server
  namespace: argocd
After applying annotations to service accounts, restart the corresponding workloads so that new pods are created with the updated IAM role configuration. Use the following commands:
kubectl delete pod -l app.kubernetes.io/name=argocd-application-controller -n argocd
kubectl delete pod -l app.kubernetes.io/name=argocd-applicationset-controller -n argocd
kubectl delete pod -l app.kubernetes.io/name=argocd-server -n argocd
Define Argo CD Project for Remote Clusters (Account A)
Update the Argo CD project to add a new destination for the remote cluster:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: krci
  namespace: argocd
spec:
  destinations:
    - namespace: krci-*
      server: https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com
Update aws-auth ConfigMap in Target Cluster (Account B)
View: aws-auth-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
        - "cd-pipeline-operator"
      rolearn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"
      username: "eksadminrole"
    - groups:
        - "system:masters"
      rolearn: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
      username: "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
Create ClusterRole and ClusterRoleBinding (Account B)
- kubectl
- Manifests
kubectl create clusterrole cd-pipeline-rolebinding-access \
--verb=get,list,create,delete \
--resource=rolebindings.rbac.authorization.k8s.io \
--verb=create,get,list \
--resource=secrets
kubectl create clusterrolebinding cd-pipeline-operator-rolebinding-access \
--clusterrole=cd-pipeline-rolebinding-access \
--group=cd-pipeline-operator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cd-pipeline-rolebinding-access
rules:
  - verbs:
      - get
      - list
      - create
      - delete
    apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - rolebindings
  - verbs:
      - create
      - get
      - list
    apiGroups:
      - ''
    resources:
      - secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cd-pipeline-operator-rolebinding-access
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: cd-pipeline-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cd-pipeline-rolebinding-access
Cluster secret configuration
KubeRocketCI IRSA cluster connection secret configuration
This configuration enables secure cluster connection using IAM Roles for Service Accounts (IRSA) in AWS. You can set it up using one of the following methods:
- KubeRocketCI portal
- Manifests
- External Secrets Operator
Navigate to KubeRocketCI portal -> Configuration -> DEPLOYMENT -> CLUSTERS, click + ADD CLUSTER, fill in the following fields, and click the SAVE button:
- Cluster name: a unique and descriptive name for the new cluster (e.g., prod-cluster);
- Cluster Host: the cluster's endpoint URL (e.g., example-cluster-domain.com);
- Authority Data: the base64-encoded Kubernetes certificate required for authentication. Obtain this certificate from the configuration file of the user account you intend to use for accessing the cluster;
- Role ARN: arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator
apiVersion: v1
kind: Secret
metadata:
  name: <cluster-name>-cluster
  namespace: krci
  labels:
    app.edp.epam.com/cluster-type: irsa
    app.edp.epam.com/secret-type: cluster
    argocd.argoproj.io/secret-type: cluster
stringData:
  config: >-
    {
      "server": "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com",
      "awsAuthConfig": {
        "clusterName": "<cluster-name>",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<Base64-encoded CA certificate of the target cluster>"
      }
    }
  name: "<cluster-name>"
  server: "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com"
"<cluster-name>-cluster": {
"config": {
"server": "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com",
"awsAuthConfig": {
"clusterName": "<cluster-name>",
"roleARN": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_CDPipelineOperator"
},
"tlsClientConfig": {
"insecure": false,
"caData": "<Base64-encoded CA certificate of the target cluster>"
}
},
"name": "<cluster-name>",
"server": "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com"
}
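The caData value is the base64-encoded CA certificate of the target cluster. On a live cluster it can be fetched ready-encoded with the AWS CLI (commented below); the runnable part demonstrates the encoding round-trip on a stand-in PEM:

```shell
# Ready-encoded value from a live cluster:
#   aws eks describe-cluster --name <cluster-name> --region <AWS_REGION> \
#     --query "cluster.certificateAuthority.data" --output text
# Stand-in PEM to show what the encoding looks like:
PEM='-----BEGIN CERTIFICATE-----
MIIBexampleonly
-----END CERTIFICATE-----'
CA_DATA="$(printf '%s' "$PEM" | base64 | tr -d '\n')"
# Round-trip to confirm the value decodes back to the certificate:
printf '%s' "$CA_DATA" | base64 -d | head -n 1
```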
ArgoCD IRSA cluster connection secret configuration
- Manifests
- External Secrets Operator
apiVersion: v1
kind: Secret
metadata:
  name: <cluster-name>-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "<cluster-name>",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<Base64-encoded CA certificate of the target cluster>"
      }
    }
  name: "<cluster-name>"
  server: "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com"
"<cluster-name>-cluster": {
"config": {
"awsAuthConfig": {
"clusterName": "<cluster-name>",
"roleARN": "arn:aws:iam::<AWS_ACCOUNT_A_ID>:role/AWSIRSA_{cluster_name}_ArgoCDMaster"
},
"tlsClientConfig": {
"insecure": false,
"caData": "<Base64-encoded CA certificate of the target cluster>"
}
},
"name": "<cluster-name>",
"server": "https://EXAMPLED539D4633E53DE1B71EXAMPLE.gr7.<AWS_REGION>.eks.amazonaws.com"
}
After applying the configuration, you can verify the cluster connection in ArgoCD -> Settings -> Clusters -> <cluster-name>:
Update KubeRocketCI ConfigMap to add a new cluster
To add the cluster to the KubeRocketCI platform, click the kubernetes icon -> Configuration -> ConfigMap -> edp-config, add the available_clusters parameter in data with the value <cluster-name>, and click Save & apply:
data:
  available_clusters: <cluster-name>
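The same change can be applied with kubectl patch instead of the portal. The runnable part below builds and validates the patch payload; the commented line applies it, and the cluster name is a placeholder:

```shell
CLUSTER="prod-cluster"   # placeholder for <cluster-name>
PATCH="{\"data\":{\"available_clusters\":\"${CLUSTER}\"}}"

# Validate the payload locally before applying:
printf '%s' "$PATCH" | python3 -m json.tool > /dev/null && echo "patch ok"

# Apply to the platform ConfigMap:
#   kubectl patch configmap edp-config -n krci --type merge -p "$PATCH"
```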
Deploy application on new cluster
Create Deployment Flow
To create a deployment flow, follow the steps below:
1. Navigate to the Deployment Flows tab and click the + Create Deployment Flow button.
2. On the Enter name tab of the Create Deployment Flow dialog, enter the deployment flow name that will be displayed in the Deployment Flows list. Use at least two characters; lower-case letters, numbers, and dashes are allowed.
3. Click the Next button to move on to the Add applications tab. The namespace created for the environment follows the pattern [KubeRocketCI namespace]-[environment name]-[stage name]; be aware that the namespace length must not exceed 63 symbols.
4. The Component tab of the Environments menu is presented below:
5. Click the Create button to finish the deployment flow configuration and proceed with configuring the environment.
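The environment namespace follows the pattern [KubeRocketCI namespace]-[environment name]-[stage name] and must stay within Kubernetes' 63-character limit, so it is worth checking the combined length up front. A small sketch with example names:

```shell
# [KubeRocketCI namespace]-[environment name]-[stage name]
KRCI_NS="krci"; ENV_NAME="my-deployment-flow"; STAGE="dev"   # example values
NS="${KRCI_NS}-${ENV_NAME}-${STAGE}"

# Kubernetes limits namespace names to 63 characters:
if [ "${#NS}" -le 63 ]; then
  echo "ok: $NS"
else
  echo "too long: ${#NS} characters"
fi
```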
Create IRSA cluster Environment
1. On the Environments menu, click the Create Environment button.
2. The Configure Stage tab of the Create Stage menu is presented below. Set the proper cluster options:
   - Cluster - choose the <cluster-name> to deploy the stage in;
   - Stage name - enter the stage name;
   - Description - enter the description for this stage.
3. Click the Next button to move on to the Add quality gates tab.
4. Click the Create button to start the provisioning of the pipeline.