Process 35,000 lines in text file with sed or bash while loop

I have ~35,000 record tuples to process. With my fifth-rate bash skills, processing them with a bash while loop takes six (6) hours. An expert bash person (with sed) could likely cut the elapsed time to 15 minutes, and that's what I need. Are you that person?

Would sed suffice, or do I need a bash while loop?

I’m beginning to suspect that php would be a better tool than bash. Am I correct?

The text file "temp.txt":

SMITH,PAUL 31299410 06/30/2024 PA 672/04
JONES,MAHVER 30942745 01/31/2024 MI 2007/17 1839/17

CHANGE TO: add bordering quotes around the first token, change "," to ", ", and capitalize the names.
Also, MM/DD/YYYY (token 3) needs converting to YYYY-MM-DD for easier sorting.

"Smith, Paul" 31299410 2024-06-30 PA 672/04
"Jones, Mahver" 30942745 2024-01-31 MI 2007/17 1839/17
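For the record, sed alone would suffice for all three changes, but only GNU sed: the \L (lowercase what follows) and \u (uppercase the next character) replacement escapes are GNU extensions. A sketch, assuming single-word names and zero-padded dates:

sed -E '
    # SMITH,PAUL ... -> smith,paul ... (GNU \L)
    s/^[^ ]+/\L&/
    # quote, capitalize, insert the space: -> "Smith, Paul" (GNU \u)
    s/^([^,]+),([^ ]+)/"\u\1, \u\2"/
    # 06/30/2024 -> 2024-06-30 (672/04 and 2007/17 cannot match dd/dd/dddd)
    s|([0-9]{2})/([0-9]{2})/([0-9]{4})|\3-\1-\2|
' temp.txt

One pass, one process: this should take well under a second on 35,000 lines.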

My not-so-bright bash solution is verbose, slow, and richly deserves to be discarded:
mapfile -t lines < temp.txt        # assumed setup: one record per array element
q=0
limit=${#lines[@]}

while [ "$q" -lt "$limit" ]        # -lt, not >: inside [ ], > is file redirection
do
    first_token="$(echo "${lines[$q]}" | cut -d ' ' -f 1)"

    buzz="$(echo "$first_token" | cut -d ',' -f 1)"   # last name
    ring="$(echo "$first_token" | cut -d ',' -f 2)"   # first name

    S1="$(echo "$buzz" | awk '{print tolower($0)}')"
    S2="$(echo "$ring" | awk '{print tolower($0)}')"

    capName="$(echo "$S1" | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) substr($i,2)} 1')"
    lowName="$(echo "$S2" | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) substr($i,2)} 1')"

    bun="\"$capName, $lowName\""
    echo "$bun" >> names.txt    #echo "$bun"

    (( q++ ))
done
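The six hours come less from bash than from process creation: each $(...) with echo | cut or echo | awk forks at least two processes, there are roughly ten per record, and >> reopens names.txt on every iteration, so that is on the order of 350,000 forks for 35,000 records. The same work can be done with bash builtins only. A sketch, assuming bash 4+ for the ${var,,} and ${var^} case operators and single-word names (this one also converts the date, via a here-string read):

while read -r name id date rest; do
    last=${name%%,*}   first=${name#*,}    # split LAST,FIRST on the comma
    last=${last,,}     first=${first,,}    # SMITH -> smith
    last=${last^}      first=${first^}     # smith -> Smith
    IFS=/ read -r mm dd yyyy <<< "$date"   # split MM/DD/YYYY
    printf '"%s, %s" %s %s-%s-%s %s\n' \
        "$last" "$first" "$id" "$yyyy" "$mm" "$dd" "$rest"
done < temp.txt > names.txt

No subshells, no external commands, and the output file is opened once, so this should drop from hours to seconds; a single sed or awk pass should still beat it.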

My 3-awk attempt:

awk '{ print "\""$1"\"" " "$2" "$4" "$3" "$4" "$5" "$6 }' complex_change.txt |
awk -F',' '{ split($1, fields, ","); print fields[1] ", " $2 }' |
awk '{ split($5, date_parts, "/"); $5 = date_parts[3] "-" date_parts[1] "-" date_parts[2]; print $1 " " $2 " " $3 " " $4 " " $5 " " $6 " " $7 }' > complex_new_change.txt

Output:

"SMITH, PAUL" 31299410 PA 2024-06-30 PA 672/04
"JONES, MAHVER" 30942745 MI 2024-01-31 MI 2007/17
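Two separate bugs put that output off target: the first awk prints $4 twice (hence the doubled state column), and the third prints only $1 through $7, which silently drops the trailing 1839/17 on the JONES record. Rather than patch the pipeline, the whole job fits in one awk process. A sketch, with the same assumptions as above (single-word names, zero-padded date in field 3):

awk '{
    split($1, name, ",")                       # name[1]=SMITH, name[2]=PAUL
    last  = toupper(substr(name[1],1,1)) tolower(substr(name[1],2))
    first = toupper(substr(name[2],1,1)) tolower(substr(name[2],2))
    $1 = "\"" last ", " first "\""             # "Smith, Paul"
    split($3, d, "/")                          # d[1]=MM d[2]=DD d[3]=YYYY
    $3 = d[3] "-" d[1] "-" d[2]                # YYYY-MM-DD
    print                                      # prints every field, incl. 1839/17
}' complex_change.txt > complex_new_change.txt

One process, one pass: for 35,000 lines this should finish in a fraction of a second, which also suggests the answer to the php question: the tool matters far less than avoiding ten forked processes per record.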