Free5GC to strongSwan IPsec tunnel with VTI error

Hi, I am trying to establish an IPsec tunnel between two VMs.
VM1 - enp0s3 10.0.0.8 (host-only adapter), enp0s8 192.168.86.38 (bridged adapter).
VM2 - enp0s3 192.168.84.7 (host-only adapter), enp0s8 192.168.86.39 (bridged adapter).

On VM2 I installed free5gc and started the core using a modified run.sh.
run.sh:
#!/usr/bin/env bash

PID_LIST=()

#added
#######################################################
UPFNS="UPFns"
EXEC_UPFNS="sudo -E ip netns exec ${UPFNS}"

export GIN_MODE=release

# Setup network namespace

sudo ip netns add ${UPFNS}

sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth0 up
sudo ip addr add 60.60.0.1 dev lo
sudo ip addr add 10.200.200.1/24 dev veth0
sudo ip addr add 10.200.200.2/24 dev veth0

sudo ip link set veth1 netns ${UPFNS}

${EXEC_UPFNS} ip link set lo up
${EXEC_UPFNS} ip link set veth1 up
${EXEC_UPFNS} ip addr add 60.60.0.101 dev lo
${EXEC_UPFNS} ip addr add 10.200.200.101/24 dev veth1
${EXEC_UPFNS} ip addr add 10.200.200.102/24 dev veth1
########################################################

cd NFs/upf/build
sudo -E ./bin/free5gc-upfd &
PID_LIST+=($!)

sleep 1

cd ../../..

NF_LIST="nrf amf smf udr pcf udm nssf ausf"

export GIN_MODE=release

for NF in ${NF_LIST}; do
    ./bin/${NF} &
PID_LIST+=($!)
sleep 0.5
done

#added
sudo ip tunnel add ipsec0 mode vti local 192.168.86.39 remote 192.168.86.38 key 5
sudo sysctl -w net.ipv4.conf.ipsec0.disable_policy=1
sudo ip address add 192.168.84.8/32 remote 10.0.0.9/32 dev ipsec0
sudo ip link set ipsec0 up
#sudo ip route add 10.0.0.0/24 dev ipsec0

sudo ./bin/n3iwf &
SUDO_N3IWF_PID=$!
sleep 1
N3IWF_PID=$(pgrep -P $SUDO_N3IWF_PID)
PID_LIST+=($SUDO_N3IWF_PID $N3IWF_PID)

function terminate()
{
sudo kill -SIGTERM ${PID_LIST[${#PID_LIST[@]}-2]} ${PID_LIST[${#PID_LIST[@]}-1]}
sleep 2
sudo ip netns del ${UPFNS}
sudo ip xfrm policy flush
sudo ip xfrm state flush
sudo ip link del veth0 type veth peer name veth1
sudo ip addr del 60.60.0.1 dev lo
sudo ip addr del 10.200.200.1/24 dev veth0
sudo ip addr del 10.200.200.2/24 dev veth0
sudo ip link del ipsec0 type vti local 192.168.86.39 remote 192.168.86.38 key 5
sudo ip tunnel del ipsec0 mode vti local 192.168.86.39 remote 192.168.86.38 key 5
}

trap terminate SIGINT
wait ${PID_LIST[@]}

I was able to run the 5G core and start the IKE message exchange from VM1 to VM2 (192.168.86.38 to 192.168.86.39). I added some code to the N3IWF so that it sends only 10.0.0.9 as the subnet IP address to strongSwan.
As a result, the topology of this connection was:
10.0.0.9 (VTI) – 192.168.86.38 =====ipsec tunnel==== 192.168.86.39 – 192.168.84.8 (VTI)
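For reference, the mirror VTI setup I used on the strongSwan side (VM1) looks roughly like this. This is a sketch assuming the same key/mark 5 as the ipsec0 device in run.sh, with vti0 as the interface name; the mark value must also match what strongSwan sets on the CHILD_SA:

```shell
# VM1 (strongSwan) side: peer of the ipsec0 VTI created in run.sh.
# Assumes key 5 matches the mark strongSwan applies to the IPsec SAs.
sudo ip tunnel add vti0 mode vti local 192.168.86.38 remote 192.168.86.39 key 5
sudo sysctl -w net.ipv4.conf.vti0.disable_policy=1
sudo ip addr add 10.0.0.9/32 remote 192.168.84.8/32 dev vti0
sudo ip link set vti0 up
```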

But after free5gc applied the XFRM rules and the IPsec tunnel was established, I got this error:

"No NAS signalling session found, retry..."
Still, I could see in strongSwan that the IPsec tunnel was established, and pings (ping -I vti0 192.168.84.8) were going through from VM1 (strongSwan) to VM2 (free5gc).
But when I tried to ping (ping -I ipsec0 10.0.0.9) from VM2 (free5gc) to VM1 (strongSwan), the destination was unreachable, and both src and dst were shown as the same address, 192.168.84.8. I changed n3iwfcfg.yaml to use a specific interface and set ipsecinterfaceaddress to 192.168.84.8.
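For anyone debugging the same symptom: the identical src/dst suggests the return packet is not being routed out through the VTI. A hedged diagnostic sketch (the explicit host route is a guess, not a confirmed fix):

```shell
# Ask the kernel which route and source address it would pick for the peer.
ip route get 10.0.0.9

# Check that the installed SAs and policies carry mark 5, matching the VTI key.
sudo ip xfrm state
sudo ip xfrm policy

# If "ip route get" does not show "dev ipsec0", pin a host route through the
# VTI with 192.168.84.8 as the source address.
sudo ip route add 10.0.0.9/32 dev ipsec0 src 192.168.84.8
```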
What is the problem, and how can I fix it so that the tunnel is established correctly?