
OpenAI ChatGPT is a gamechanger for FM immersion



14 hours ago, Mars_Blackmon said:

I changed 1 BPD to a CD (even though 2 BPDs work well in the game, ratemytactic doesn't like it), and I changed the CAM from attack to support. That's hardly half the tactic. The offside trap works, but I was advised to use it situationally. Same with counter and counter-press. But we all know that pretty much works with any high-press tactic.

Most importantly, no one is forcing you to use it.

The original tactic had two WBs instead of FBs.

But the last line is pointless: that's not what this is about. I'm not the one looking for a tactic, but anyone using ChatGPT would be, and the tactic they would get is flawed. Simple as that. Now if it knew about the game, was integrated with the game's logic, etc., it would be a game-changer: but for now it's pretty flawed and not that close to improving the status quo of FM.



22 minutes ago, The3points said:

The original tactic had two WBs instead of FBs.

But the last line is pointless: that's not what this is about. I'm not the one looking for a tactic, but anyone using ChatGPT would be, and the tactic they would get is flawed. Simple as that. Now if it knew about the game, was integrated with the game's logic, etc., it would be a game-changer: but for now it's pretty flawed and not that close to improving the status quo of FM.

Error on my part, but it's still 5*

You can recreate it with my few edits to confirm if that matters.

 

And what do you mean by "know about the game"? It has data on the game up until 2021, and if you use the web browser version, it can retrieve data from the internet. We aren't in the phase of AI thinking for itself. Is that the disconnect here? Machine learning is the first phase of AI. We are years, if not decades, from the AI you see in movies at a commercial level...

 



I still don't think you get the point that has been made throughout this thread. The tactic, in my opinion, is subpar: the wings are exposed and the attacking play is one-dimensional.

"Know the game" means knowing exactly what each click does to the game engine. "Know the game" is understanding what factors change when you restart the game. "Know the game" means having 100% access to the formulas that tempo, and directness change, when these forums have three different definitions for each. Right know, ChatGPT "knows the game" as much as an intermediate player who gets the words and what they mean but not much of the significance and how they act in-game.image.thumb.png.7399bd5fe73c0e6cb6a2bc4a5aa599c3.pngThis is what I see but anyway no matter, this post this tactic on a forum anywhere, reddit, here wherever, they'll say that it is too attacking, contradicting styles, etc. For example many would ask why don't you have sweeper keeper for high line, that the wing is too aggressive, and so on and so forth. Especially in the original tactic, which while you say it's a good base (and it is, the instructions work quite well), how many new FM players are going to change 3 roles? My point is treating chatGPT like a trusty resource is worse than using Wikipedia for an academic article: chatGPT realistically doesn't know what's it's talking about. If it was hooked up with how the game worked, the "secret formula", then it would be trusty because it does know what it's talking about


46 minutes ago, The3points said:

I still don't think you get the point that has been made throughout this thread. The tactic, in my opinion, is subpar: the wings are exposed and the attacking play is one-dimensional.

"Know the game" means knowing exactly what each click does to the game engine. "Know the game" is understanding what factors change when you restart the game. "Know the game" means having 100% access to the formulas that tempo, and directness change, when these forums have three different definitions for each. Right know, ChatGPT "knows the game" as much as an intermediate player who gets the words and what they mean but not much of the significance and how they act in-game.image.thumb.png.7399bd5fe73c0e6cb6a2bc4a5aa599c3.pngThis is what I see but anyway no matter, this post this tactic on a forum anywhere, reddit, here wherever, they'll say that it is too attacking, contradicting styles, etc. For example many would ask why don't you have sweeper keeper for high line, that the wing is too aggressive, and so on and so forth. Especially in the original tactic, which while you say it's a good base (and it is, the instructions work quite well), how many new FM players are going to change 3 roles? My point is treating chatGPT like a trusty resource is worse than using Wikipedia for an academic article: chatGPT realistically doesn't know what's it's talking about. If it was hooked up with how the game worked, the "secret formula", then it would be trusty because it does know what it's talking about

I'm not sure what you didn't include. Nonetheless, we can go all day about the best tactics to use and what each setting is programmed to do in the game. The same conversations are happening on multiple FM forums about tactics people develop manually. The thing that matters is that it worked in the game. It won the league with a team that was predicted to finish 4th. I am not looking for the next underdog tactic that takes a bottom team to #1. I am pointing out that it can put together a coherent tactic, more so than the preset tactics, which do not score well using guidetofm and ratemytactic calculations, if those hold any weight. It's a usable tactic and not put together randomly.

That's the bottom line.

If I am a user who isn't good at developing tactics, then, as an adult, would I rather spend all day on forums waiting for a reply, get a response only to be told to use the search function, or value my time and get that information in minutes? I value my time, so I would rather spend those otherwise wasted hours enjoying the game.

But to each their own.


6 hours ago, Mars_Blackmon said:

And what do you mean by "know about the game"? It has data on the game up until 2021, and if you use the web browser version, it can retrieve data from the internet. We aren't in the phase of AI thinking for itself. Is that the disconnect here? Machine learning is the first phase of AI. We are years, if not decades, from the AI you see in movies at a commercial level...

My issue with ChatGPT and the like is that they can't be critical of their sources, and thus don't know how correct the information is. Only a few months ago the newspapers here in Norway made a big deal about how it suggested a neo-Nazi as a "Norwegian historical hero". Why? Because some idiot had edited some Wikipedia page or something, and ChatGPT took it as gospel and gave it back to anyone who asked about "Norwegian heroes".

That is an example of how the current state of AI is only as good as the sources it can use. And since neither one of us knows what sources it uses for FM, we can't say anything about the validity of it. ChatGPT is a fun toy to play around with, and it can do a lot of good things, but I wouldn't trust the information there further than I can throw the servers it's hosted on without double-checking everything down to the minute details.


Funny thing about ChatGPT: even the guys who created it don't know how the neural network actually works. They know there are billions of calculations happening, but can they say whether it's heading in the right direction? No.

Some of those ChatGPT developers left to set up Anthropic; their AI tool is a more creative version called Claude. Their interviews are fun.

Having used ChatGPT extensively, I can say without hesitation that it's just a glorified "auto-complete" tool that digs through information on the web, which may not always be correct.

It's fantastic for research if you are good at prompting it; ultimately, though, a subject matter expert has to authenticate and verify the answers. It is self-learning, based on information that may or may not be right. And even the researchers don't know how it ticks; it's all trial and error. Like any tool, in the hands of an idiot it's an idiot.
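To make the "auto-complete" point concrete, here is a deliberately tiny sketch of next-word prediction using a toy bigram model. This has nothing to do with ChatGPT's actual architecture or training; it only illustrates the "predict what comes next from what came before" idea described above:

import random
from collections import defaultdict

# Toy "auto-complete": record which word tends to follow which,
# then repeatedly sample a next word. Large language models do
# something of this flavour at vastly larger scale, over tokens,
# with a neural network instead of a lookup table.
corpus = "the striker runs the channel and the striker scores".split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def autocomplete(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # nothing ever followed this word in the corpus
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(autocomplete("the"))  # e.g. "the striker runs the channel and the"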

 


5 hours ago, XaW said:

My issue with ChatGPT and the like is that they can't be critical of their sources, and thus don't know how correct the information is. Only a few months ago the newspapers here in Norway made a big deal about how it suggested a neo-Nazi as a "Norwegian historical hero". Why? Because some idiot had edited some Wikipedia page or something, and ChatGPT took it as gospel and gave it back to anyone who asked about "Norwegian heroes".

That is an example of how the current state of AI is only as good as the sources it can use. And since neither one of us knows what sources it uses for FM, we can't say anything about the validity of it. ChatGPT is a fun toy to play around with, and it can do a lot of good things, but I wouldn't trust the information there further than I can throw the servers it's hosted on without double-checking everything down to the minute details.

The new web browsing feature lets it fact-check itself by checking multiple reliable sources.
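In spirit, that kind of cross-checking looks something like the sketch below: fetch a handful of independent sources for a claim and only accept it if most of them agree. This is just an illustration of the idea, not how OpenAI's browsing actually works, and the fetch/claim_supported_by helpers are deliberately naive placeholders:

import urllib.request

def fetch(url: str) -> str:
    # Placeholder retrieval step; real browsing would run searches,
    # render pages, handle failures, and so on.
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="ignore")

def claim_supported_by(page_text: str, claim: str) -> bool:
    # Hopelessly naive stand-in for "does this source support the claim?"
    return claim.lower() in page_text.lower()

def cross_check(claim: str, source_urls: list[str]) -> bool:
    # Accept the claim only if a majority of the fetched sources support it.
    votes = [claim_supported_by(fetch(url), claim) for url in source_urls]
    return sum(votes) > len(votes) / 2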


Just now, Mars_Blackmon said:

The new web browsing feature lets it fact-check itself by checking multiple reliable sources.

That's an improvement, but seeing how, only months ago, it suggested a neo-Nazi as a hero here (something no one outside the neo-Nazi community would ever think), it shows that you still can't trust the responses.


It's still massively weakened by the dataset itself. When it was first flavour of the month, it was getting certain fairly simple mathematical concepts spot on, and that has since drifted to returning stuff that is completely wrong. I also wouldn't be entirely surprised if, for every person out there making the service more reliable, there are at least a couple working to undermine it, either for nefarious purposes or just because it's funny to them.


22 minutes ago, forameuss said:

It's still massively weakened by the dataset itself. When it was first flavour of the month, it was getting certain fairly simple mathematical concepts spot on, and that has since drifted to returning stuff that is completely wrong. I also wouldn't be entirely surprised if, for every person out there making the service more reliable, there are at least a couple working to undermine it, either for nefarious purposes or just because it's funny to them.

Not just people, either! The more it draws on the Internet as a data source, the more it hoovers up ******** blogspam written for SEO purposes by LLMs...


5 hours ago, Rashidi said:

Funny thing about ChatGPT: even the guys who created it don't know how the neural network actually works. They know there are billions of calculations happening, but can they say whether it's heading in the right direction? No.

Some of those ChatGPT developers left to set up Anthropic; their AI tool is a more creative version called Claude. Their interviews are fun.

Having used ChatGPT extensively, I can say without hesitation that it's just a glorified "auto-complete" tool that digs through information on the web, which may not always be correct.

It's fantastic for research if you are good at prompting it; ultimately, though, a subject matter expert has to authenticate and verify the answers. It is self-learning, based on information that may or may not be right. And even the researchers don't know how it ticks; it's all trial and error. Like any tool, in the hands of an idiot it's an idiot.

 

I agree 100%


1 hour ago, XaW said:

That's an improvement, but seeing how, only months ago, it suggested a neo-Nazi as a hero here (something no one outside the neo-Nazi community would ever think), it shows that you still can't trust the responses.

Agreed, it's a technology that's improving every day, whether that's from users creating prompts, third-party plugins, or integrated features. A few months ago seems like a short amount of time, but GPT-3.5 is worthless compared to GPT-4, and I don't think 4 is free to the public yet. A lot of the issues raised have been addressed by the things that I mentioned, or are currently in the process of being addressed.

They recently added the ability to insert images to analyze, and people are already using it to get code for UIs. It's not something to rely on, but just like I said with the tactics, it's a good starting point.

 

When it comes to FM integration (if we are having that conversation), I think the main issue is an ethics issue, and not really the tech or the costs. Businesses large and small have already integrated this technology, and if SI were to include a chatbot assistant, the data it pulled from would come from SI itself.

The problem is that people will always try to break something, and the last thing SI needs is a screenshot going viral of an in-game conversation that spits out something discriminatory.
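If SI did go down that road, I'd picture something roughly like the sketch below: the assistant answers only from a knowledge base SI curates, and anything it can't ground there gets a canned refusal, which is also what keeps the "make it say something awful" attack surface small. Everything here (the KNOWLEDGE_BASE entries, assistant_reply) is made up purely for illustration and isn't anything SI has announced:

# Hypothetical in-game assistant that only answers from SI-curated content.
KNOWLEDGE_BASE = {
    "gegenpress": "A high-intensity pressing style; demands high work rate and stamina.",
    "wing-back": "Provides width from the full-back positions; needs pace and crossing.",
}

REFUSAL = "I can only answer questions about the game itself."

def assistant_reply(question: str) -> str:
    # Ground every answer in curated entries; never free-generate text.
    matches = [text for topic, text in KNOWLEDGE_BASE.items()
               if topic in question.lower()]
    return " ".join(matches) if matches else REFUSAL

print(assistant_reply("How do I set up a gegenpress with wing-backs?"))
# -> returns both curated entries; an off-topic question gets the refusal line.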


4 hours ago, Mars_Blackmon said:

Agreed, it's a technology that's improving every day, whether that's from users creating prompts, third-party plugins, or integrated features. A few months ago seems like a short amount of time, but GPT-3.5 is worthless compared to GPT-4, and I don't think 4 is free to the public yet. A lot of the issues raised have been addressed by the things that I mentioned, or are currently in the process of being addressed.

They recently added the ability to insert images to analyze, and people are already using it to get code for UIs. It's not something to rely on, but just like I said with the tactics, it's a good starting point.

 

When it comes to FM integration (if we are having that conversation), I think the main issue is an ethics issue, and not really the tech or the costs. Businesses large and small have already integrated this technology, and if SI were to include a chatbot assistant, the data it pulled from would come from SI itself.

The problem is that people will always try to break something, and the last thing SI needs is a screenshot going viral of an in-game conversation that spits out something discriminatory.

Yeah, I have no doubt it will, some day, be good enough to put more trust in, but only if the inputs are sanitized enough. As it is, except in the use cases where one can control it, it needs some manual verification to be relied on.

As for the use in FM, as a starting point, I'm all for it, but presenting it as competent...? I'd not go that far.

And yes, if SI used some sort of internal version of it where they can control the input, then sure, that could be a good thing, but that would also create the issue of having to limit how much the assistant gives away. After all, we don't want an assistant that can break the game for you, so do you link it to attributes? Preferences? Knowledge of the players? I think the system might cost more than it's worth, for SI that is. We as users would probably just be happy, but at the current cost of this, I have my doubts the ROI for an assistant in FM would be good enough yet.

And yes, the brigade of "let's see if we can make this bot sound racist" would not be far off, I'd imagine. It's not long since I saw someone post a screenshot of ChatGPT producing rather racist stuff through keywords such as "hypothetical" or "fictitious" or the like. I can't even imagine the pain of foolproofing this stuff...


2 minutes ago, XaW said:

And yes, the brigade of "let's see if we can make this bot sound racist" would not be far off, I'd imagine. It's not long since I saw someone post a screenshot of ChatGPT producing rather racist stuff through keywords such as "hypothetical" or "fictitious" or the like. I can't even imagine the pain of foolproofing this stuff...

...


 

It's sad in a way. If these kinds of machines are like people, they're relatively smart but hopelessly naive, and they can't say no. There was that Microsoft bot they unleashed on Twitter a while back, and they had to switch it off in less than 24 hours because it started parroting some really horrendous stuff (some of which was just imitating people, but in other cases it came up with very questionable stuff of its own accord). There's also the case of the German artist who managed to completely fool Google Maps by wheeling around a trolley with 99 GPS-enabled phones, tricking the system into thinking there was an incredible traffic jam. And the fact that Amazon brought those delivery robots into several towns, and people kept turning them onto their backs and generally mucking about with them. Point is, for every cool piece of tech, there are going to be a lot of people who just want to see it burn to the ground so they can laugh at the embers. And while you can argue that the Google Maps and Amazon examples aren't really too harmful, when it comes to something like AI, the consequences of it going off the rails are a little more serious, and only going to get more serious as these systems get more powerful.

But that's going off on a wild tangent, of course.

